DHIS 2 System Administration guide

Applicable to version 2.34

DHIS 2 Documentation team, June 2020

Copyright © 2006-2020 DHIS 2 Documentation team

<table> <thead> <tr> <th>Revision History</th> <th></th> </tr> </thead> <tbody> <tr> <td>2.34@1098214</td> <td>2020-06-01 11:45:28 +0200</td> </tr> </tbody> </table>

**Warranty:** THIS DOCUMENT IS PROVIDED BY THE AUTHORS “AS IS” AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS MANUAL AND PRODUCTS MENTIONED HEREIN, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

**License:** Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the source of this documentation, and is available online here: [http://www.gnu.org/licenses/fdl.html](http://www.gnu.org/licenses/fdl.html)

# Table of Contents

- 1 About this guide
- 2 Installation
  - 2.1 Introduction
  - 2.2 Server specifications
  - 2.3 Software requirements
  - 2.4 Server setup
    - 2.4.1 Creating a user to run DHIS2
    - 2.4.2 Creating the configuration directory
    - 2.4.3 Setting server time zone and locale
    - 2.4.4 PostgreSQL installation
    - 2.4.5 PostgreSQL performance tuning
    - 2.4.6 System configuration
    - 2.4.7 Java installation
    - 2.4.8 Tomcat and DHIS2 installation
    - 2.4.9 Running DHIS2
  - 2.5 File store configuration
  - 2.6 Google service account configuration
  - 2.7 LDAP configuration
  - 2.8 Encryption configuration
  - 2.9 Read replica database configuration
  - 2.10 Web server cluster configuration
    - 2.10.1 Clustering overview
    - 2.10.2 DHIS 2 instance cluster configuration
    - 2.10.3 Redis shared data store cluster configuration
    - 2.10.4 Files folder configuration
    - 2.10.5 Load balancer configuration
  - 2.11 Analytics cache configuration
  - 2.12 Monitoring
  - 2.13 Reverse proxy configuration
    - 2.13.1 Basic nginx setup
    - 2.13.2 Enabling SSL with nginx
    - 2.13.3 Enabling caching with nginx
    - 2.13.4 Rate limiting with nginx
    - 2.13.5 Making resources available with nginx
  - 2.14 DHIS2 configuration reference
  - 2.15 Application logging
    - 2.15.1 Log files
    - 2.15.2 Log configuration
  - 2.16 Working with the PostgreSQL database
- 3 Monitoring
  - 3.1 Introduction
  - 3.2 Setup
    - 3.2.1 Installing Prometheus + Grafana on Ubuntu and Debian
    - 3.2.2 Configuring Prometheus as a service
    - 3.2.3 Create a Prometheus service
    - 3.2.4 Set-up Nginx reverse proxy
    - 3.2.5 Enable reverse proxy authentication
    - 3.2.6 Installing Grafana on Ubuntu and Debian
    - 3.2.7 Installing Prometheus + Grafana using Docker
    - 3.2.8 Configure Prometheus to pull metrics from one or more DHIS2 instances
    - 3.2.9 Configure the DHIS2 exporter
- 4.1 Introduction
- 4.2 Single Audit table
- 4.3 Audit Scope
- 4.4 Audit Type
- 4.5 Setup

# 1 About this guide

The DHIS2 documentation is a collective effort and has been developed by the development team and users. While the guide strives to be complete, there may be certain functionalities which have been omitted or which have yet to be documented. This section explains some of the conventions which are used throughout the document.

DHIS2 is a browser-based application. In many cases, screenshots have been included for enhanced clarity. Shortcuts to various functionalities are displayed such as **Data element > Data element group**. The “>” symbol indicates that you should click **Data element** and then click **Data element group** in the user interface.

Different styles of text have been used to highlight important parts of the text or particular types of text, such as source code. Each of the conventions used in the document is explained below.

- **Note** A note contains additional information which should be considered or a reference to more information which may be helpful.
- **Tip** A tip can be a useful piece of advice, such as how to perform a particular task more efficiently.
- **Important** Important information should not be ignored, and usually indicates something which is required by the application.
- **Caution** Information contained in these sections should be carefully considered, and if not heeded, could result in unexpected results in analysis, performance, or functionality.
- **Warning** Information contained in these sections, if not heeded, could result in permanent data loss or affect the overall usability of the system.

Program listings usually contain some type of computer code. They will be displayed with a shaded background and a different font.
- Commands will be displayed in bold text, and represent a command which would need to be executed on the operating system or database.
- Links to external web sites or cross references will be displayed in blue text, and underlined like this.

# 2 Installation

The installation chapter provides information on how to install DHIS2 in various contexts, including an online central server, an offline local network, a standalone application, and the self-contained package called DHIS2 Live.

## 2.1 Introduction

DHIS2 runs on all platforms for which a Java Runtime Environment version 8 or higher exists, which includes most popular operating systems such as Windows, Linux and Mac. DHIS2 runs on the PostgreSQL database system. DHIS2 is packaged as a standard Java Web Archive (WAR file) and thus runs in any Servlet container, such as Tomcat or Jetty.

The DHIS2 team recommends the Ubuntu 16.04 LTS operating system, the PostgreSQL database system and the Tomcat Servlet container as the preferred environment for server installations.

This chapter provides a guide for setting up the above technology stack. It should however be read as a guide for getting up and running and not as exhaustive documentation for the mentioned environment. We refer to the official Ubuntu, PostgreSQL and Tomcat documentation for in-depth reading.

The dhis2-tools Ubuntu package automates many of the tasks described in the guide below and is recommended for most users, especially those who are not familiar with the command line or administration of servers. It is described in detail in a separate chapter in this guide.

## 2.2 Server specifications

DHIS2 is a database intensive application and requires that your server has an appropriate amount of RAM, number of CPU cores and a fast disk. These recommendations should be considered as rules-of-thumb and not exact measures. DHIS2 scales linearly on the amount of RAM and number of CPU cores, so the more you can afford, the better the application will perform.
- **RAM**: At least 1 GB memory per 1 million captured data records per month or per 1000 concurrent users. At least 4 GB for a small instance, 12 GB for a medium instance.
- **CPU cores**: 4 CPU cores for a small instance, 8 CPU cores for a medium or large instance.
- **Disk**: Ideally use an SSD. Otherwise use a 7200 rpm disk. The minimum read speed is 150 MB/s, 200 MB/s is good, 350 MB/s or better is ideal. In terms of disk space, at least 60 GB is recommended, but the need will depend entirely on the amount of data which is contained in the data value tables. Analytics tables require a significant amount of disk space. Plan ahead and ensure that your server can be upgraded with more disk space as it becomes needed.

## 2.3 Software requirements

Later DHIS2 versions require the following software versions to operate.

- Java JDK or JRE version 8 or later.
- Any operating system for which a Java JDK or JRE version 8 exists.
- PostgreSQL database version 9.6 or later.
- PostGIS database extension version 2.2 or later.
- Tomcat servlet container version 8.5.50 or later, or another Servlet API 3.1 compliant servlet container.

## 2.4 Server setup

This section describes how to set up a server instance of DHIS2 on Ubuntu 16.04 64-bit with PostgreSQL as the database system and Tomcat as the Servlet container. This guide is not meant to be a step-by-step guide per se, but rather to serve as a reference for how DHIS2 can be deployed on a server. There are many possible deployment strategies, which will differ depending on the operating system and database you are using, and other factors. The term *invoke* refers to executing a given command in a terminal.

For a national server the recommended configuration is a quad-core 2 GHz processor or higher and 12 GB RAM or higher. Note that a 64-bit operating system is required for utilizing more than 4 GB of RAM.
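When preparing a server, it can help to check installed component versions against the software requirements listed above. The helper below is a hypothetical sketch (not part of the official tooling) that compares dotted version strings using GNU `sort -V`:

```shell
# Hypothetical helper: returns success (0) when version $1 >= version $2.
# Useful for checking e.g. PostgreSQL >= 9.6 or Tomcat >= 8.5.50.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

# Example: check a PostgreSQL version string against the 9.6 requirement.
version_ge "9.6.24" "9.6" && echo "PostgreSQL version OK"
```

In practice you would feed it the output of commands such as `psql --version` after extracting the version number.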
For this guide we assume that 8 GB RAM is allocated for PostgreSQL and 8 GB RAM is allocated for Tomcat/JVM, and that a 64-bit operating system is used. *If you are running a different configuration please adjust the suggested values accordingly!* We recommend that the available memory is split roughly equally between the database and the JVM. Remember to leave some of the physical memory to the operating system for it to perform its tasks, for instance around 2 GB. The steps marked as *optional*, like the step for performance tuning, can be done at a later stage.

### 2.4.1 Creating a user to run DHIS2

You should create a dedicated user for running DHIS2.

> **Important**
> You should not run the DHIS2 server as a privileged user such as root.

Create a new user called dhis by invoking:

```
sudo useradd -d /home/dhis -m dhis -s /bin/false
```

Then set the password for the account by invoking:

```
sudo passwd dhis
```

Make sure you set a strong password with at least 15 random characters.

### 2.4.2 Creating the configuration directory

Start by creating a suitable directory for the DHIS2 configuration files. This directory will also be used for apps, files and log files. An example directory could be:

```
mkdir /home/dhis/config
chown dhis:dhis /home/dhis/config
```

DHIS2 will look for an environment variable called `DHIS2_HOME` to locate the DHIS2 configuration directory. This directory will be referred to as `DHIS2_HOME` in this installation guide. We will define the environment variable in a later step in the installation process.

### 2.4.3 Setting server time zone and locale

It may be necessary to reconfigure the time zone of the server to match the time zone of the location which the DHIS2 server will be covering. If you are using a virtual private server, the default time zone may not correspond to the time zone of your DHIS2 location. You can easily reconfigure the time zone by invoking the below and following the instructions.
```bash
sudo dpkg-reconfigure tzdata
```

PostgreSQL is sensitive to locales so you might have to install your locale first. To check existing locales and install new ones (e.g. Norwegian):

```bash
locale -a
sudo locale-gen nb_NO.UTF-8
```

### 2.4.4 PostgreSQL installation

Install PostgreSQL by invoking:

```bash
sudo apt-get install postgresql-10 postgresql-contrib-10 postgresql-10-postgis-2.4
```

Create a non-privileged user called `dhis` by invoking:

```bash
sudo -u postgres createuser -SDRP dhis
```

Enter a secure password at the prompt. Create a database by invoking:

```bash
sudo -u postgres createdb -O dhis dhis2
```

You now have a PostgreSQL user called `dhis` and a database called `dhis2`.

The PostGIS extension is needed for several GIS/mapping features to work. DHIS 2 will attempt to install the PostGIS extension during startup. If the DHIS 2 database user does not have permission to create extensions, you can create it using the `postgres` user with the following command:

```bash
sudo -u postgres psql -c "create extension postgis;" dhis2
```

### 2.4.5 PostgreSQL performance tuning

Tuning PostgreSQL is necessary to achieve a high-performing system but is optional in terms of getting DHIS2 to run. PostgreSQL is configured and tuned through the `postgresql.conf` file which can be edited like this:

```bash
sudo nano /etc/postgresql/10/main/postgresql.conf
```

and set the following properties:

```
max_connections = 200
```

Determines the maximum number of connections which PostgreSQL will allow.

```
shared_buffers = 3200MB
```

Determines how much memory should be allocated exclusively for PostgreSQL caching. This setting controls the size of the kernel shared memory which should be reserved for PostgreSQL. It should be set to around 40% of the total memory dedicated to PostgreSQL.
```
work_mem = 20MB
```

Determines the amount of memory used for internal sort and hash operations. This setting applies per connection, per query, so a lot of memory may be consumed if it is raised too high. Setting this value correctly is essential for DHIS2 aggregation performance.

```
maintenance_work_mem = 512MB
```

Determines the amount of memory PostgreSQL can use for maintenance operations such as creating indexes, running vacuum and adding foreign keys. Increasing this value might improve performance of index creation during the analytics generation processes.

```
effective_cache_size = 8000MB
```

An estimate of how much memory is available for disk caching by the operating system (not an allocation), which is used by PostgreSQL to determine whether a query plan will fit into memory or not. Setting it to a higher value than what is really available will result in poor performance. This value should be inclusive of the shared_buffers setting. PostgreSQL has two layers of caching: the first layer uses the kernel shared memory and is controlled by the shared_buffers setting; PostgreSQL delegates the second layer to the operating system disk cache, and the size of available memory can be given with the effective_cache_size setting.

```
checkpoint_completion_target = 0.8
```

Determines the fraction of the time between checkpoints over which checkpoint writes are spread. Increasing this value spreads checkpoint I/O over a longer period and might improve throughput in write-heavy systems.

```
synchronous_commit = off
```

Specifies whether transaction commits will wait for WAL records to be written to disk before returning to the client or not. Setting this to off will improve performance considerably. It also implies that there is a slight delay between the time a transaction is reported successful to the client and the time it is actually safe, but the database state cannot be corrupted, and this is a good alternative for performance-intensive and write-heavy systems like DHIS2.

```
wal_writer_delay = 10000ms
```

Specifies the delay between WAL write operations.
Setting this to a high value will improve performance on write-heavy systems since potentially many write operations can be executed within a single flush to disk.

```
random_page_cost = 1.1
```

SSD only. Sets the query planner’s estimate of the cost of a non-sequentially-fetched disk page. A low value will cause the system to prefer index scans over sequential scans. A low value makes sense for databases running on SSDs or being heavily cached in memory. The default value is 4.0, which is reasonable for traditional disks.

```
max_locks_per_transaction = 96
```

Specifies the average number of object locks allocated for each transaction. This is set mainly to allow upgrade routines which touch a large number of tables to complete.

Restart PostgreSQL by invoking the following command:

```
sudo /etc/init.d/postgresql restart
```

### 2.4.6 System configuration

The database connection information is provided to DHIS2 through a configuration file called `dhis.conf`. Create this file and save it in the `DHIS2_HOME` directory. As an example this location could be:

```
/home/dhis/config/dhis.conf
```

A configuration file for PostgreSQL corresponding to the above setup has these properties:

```
# Database connection
#........................................................................

# JDBC driver class
connection.driver_class = org.postgresql.Driver

# Database connection URL
connection.url = jdbc:postgresql:dhis2

# Database username
connection.username = dhis

# Database password
connection.password = xxxx
```

It is strongly recommended to enable the `server.https` setting and deploy DHIS 2 over the encrypted HTTPS protocol. This setting will enable e.g. secure cookies. HTTPS deployment is required when this setting is enabled. The `server.base.url` setting refers to the URL at which the system is accessed by end users over the network.

Note that the configuration file supports environment variables.
This means that you can set certain properties as environment variables and have them resolved, e.g. like this, where DB_PASSWD is the name of the environment variable:

```
connection.password = ${DB_PASSWD}
```

Note that this file contains the password for your DHIS2 database in clear text so it needs to be protected from unauthorized access. To do this, invoke the following command, which ensures that only the dhis user which owns the file is allowed to read it:

```
chmod 0600 dhis.conf
```

### 2.4.7 Java installation

The recommended Java JDK for DHIS 2 is OpenJDK 8. OpenJDK is licensed under the GPL license and can be run free of charge. You can install it with the following command:

```
sudo apt-get install openjdk-8-jdk
```

Verify that your installation is correct by invoking:

```
java -version
```

### 2.4.8 Tomcat and DHIS2 installation

To install the Tomcat servlet container we will utilize the Tomcat user package by invoking:

```
sudo apt-get install tomcat8-user
```

This package lets us easily create a new Tomcat instance. The instance will be created in the current directory. An appropriate location is the home directory of the dhis user:

```
cd /home/dhis/
sudo tomcat8-instance-create tomcat-dhis
sudo chown -R dhis:dhis tomcat-dhis/
```

This will create an instance in a directory called `tomcat-dhis`. Note that the tomcat8-user package allows for creating any number of instances if that is desired.

Next edit the file `tomcat-dhis/bin/setenv.sh` and add the lines below. The first line will set the location of your Java Runtime Environment, the second will dedicate memory to Tomcat and the third will set the location where DHIS2 will search for the `dhis.conf` configuration file. Please check that the path to the Java binaries is correct, as it might vary from system to system, e.g. on AMD systems you might see `/java-8-openjdk-amd64`.
Note that you should adjust this to your environment:

```bash
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-amd64/"
export JAVA_OPTS="-Xmx7500m -Xms4000m"
export DHIS2_HOME="/home/dhis/config"
```

The Tomcat configuration file is located in `tomcat-dhis/conf/server.xml`. The element which defines the connection to DHIS is the `Connector` element with port 8080. You can change the port number in the Connector element to a desired port if necessary. The `relaxedQueryChars` attribute is necessary to allow certain characters in URLs used by the DHIS2 front-end.

```xml
<Connector port="8080" protocol="HTTP/1.1"
    connectionTimeout="20000"
    redirectPort="8443"
    relaxedQueryChars="[]" />
```

The next step is to download the DHIS2 WAR file and place it into the webapps directory of Tomcat. You can download the DHIS2 version 2.33 WAR release like this (replace 2.33 with your preferred version if necessary):

```
wget https://releases.dhis2.org/2.33/dhis.war
```

Alternatively, for patch releases, the folder structure is based on the patch release ID in a subfolder under the main release. E.g. you can download the DHIS2 version 2.33.1 WAR release like this (replace 2.33 with your preferred version, and 2.33.1 with your preferred patch, if necessary):

```
wget https://releases.dhis2.org/2.33/2.33.1/dhis.war
```

Move the WAR file into the Tomcat webapps directory. We want to call the WAR file ROOT.war in order to make it available at localhost directly, without a context path:

```
mv dhis.war tomcat-dhis/webapps/ROOT.war
```

DHIS2 should never be run as a privileged user. After you have modified the setenv.sh file, modify the startup script to check and verify that the script has not been invoked as root.

### 2.4.9 Running DHIS2

DHIS2 can now be started by invoking:

```bash
sudo -u dhis tomcat-dhis/bin/startup.sh
```

> **Important**
> The DHIS2 server should never be run as root or another privileged user.
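The root check mentioned above can be sketched as follows. This is an illustrative guard, not the stock Tomcat script; the Tomcat start command in the comment is an assumption and should be adapted to your instance:

```shell
# Illustrative guard to prepend to tomcat-dhis/bin/startup.sh so that
# DHIS2 is never started as root. Fails when the effective UID is 0.
check_not_root() {
  if [ "$(id -u)" -eq 0 ]; then
    echo "This script must NOT be run as root" 1>&2
    return 1
  fi
  return 0
}

# In tomcat-dhis/bin/startup.sh, place the guard before the line that
# starts Tomcat, e.g. (paths are assumptions):
#   check_not_root || exit 1
#   exec "$CATALINA_HOME/bin/catalina.sh" start
```

Combined with invoking the scripts via `sudo -u dhis`, this gives a second line of defense against accidentally starting the server as a privileged user.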
DHIS2 can be stopped by invoking:

```bash
sudo -u dhis tomcat-dhis/bin/shutdown.sh
```

To monitor the behavior of Tomcat, the log is the primary source of information. The log can be viewed with the following command:

```bash
tail -f tomcat-dhis/logs/catalina.out
```

Assuming that the WAR file is called ROOT.war, you can now access your DHIS2 instance at the following URL:

```
http://localhost:8080
```

## 2.5 File store configuration

DHIS2 is capable of capturing and storing files. By default, files will be stored on the local file system of the server which runs DHIS2, in a files directory under the $DHIS2_HOME external directory location. You can also configure DHIS2 to store files on cloud-based storage providers. AWS S3 is the only supported provider currently. To enable cloud-based storage you must define the following additional properties in your `dhis.conf` file:

```
# File store provider. Currently 'filesystem' and 'aws-s3' are supported.
filestore.provider = aws-s3

# Directory in external directory on local file system and bucket on AWS S3
filestore.container = files

# The following configuration is applicable to cloud storage only (AWS S3)

# Datacenter location. Optional but recommended for performance reasons.
filestore.location = eu-west-1

# Username / Access key on AWS S3
filestore.identity = xxxx

# Password / Secret key on AWS S3 (sensitive)
filestore.secret = xxxx
```

This configuration is an example and should be changed to fit your needs; if you plan to use the default local file system storage you can omit these properties entirely. If you want to use an external provider, the last block of properties needs to be defined and the `provider` property must be set to a supported provider (currently only AWS S3).

**Note**
If you've configured cloud storage in `dhis.conf`, all files you upload, as well as the files the system generates, will use cloud storage.
For a production system the initial setup of the file store should be carefully considered, as moving files across storage providers while keeping the integrity of the database references could be complex. Keep in mind that the contents of the file store might contain both sensitive and integral information; protecting access to the folder, as well as making sure a backup plan is in place, is recommended for a production implementation.

**Note**
AWS S3 is the only supported provider, but more providers are likely to be added in the future, such as Google Cloud Store and Azure Blob Storage. Let us know if you have a use case for additional providers.

## 2.6 Google service account configuration

DHIS2 can connect to various Google service APIs. For instance, the DHIS2 GIS component can utilize the Google Earth Engine API to load map layers. In order to provide API access tokens you must set up a Google service account and create a private key:

- Create a Google service account. Please consult the [Google identity platform](https://developers.google.com/identity) documentation.
- Visit the [Google cloud console](https://console.cloud.google.com/) and go to API Manager > Credentials > Create credentials > Service account key. Select your service account and JSON as key type and click Create.
- Rename the JSON key to `dhis-google-auth.json`.

After downloading the key file, put the `dhis-google-auth.json` file in the `DHIS2_HOME` directory (the same location as the `dhis.conf` file). As an example this location could be:

```
/home/dhis/config/dhis-google-auth.json
```

## 2.7 LDAP configuration

DHIS2 is capable of using an LDAP server for authentication of users. For LDAP authentication it is required to have a matching user in the DHIS2 database per LDAP entry. The DHIS2 user will be used to represent authorities / user roles.
To set up LDAP authentication you need to configure the LDAP server URL, a manager user, and an LDAP search base and search filter. This configuration should be done in the main DHIS 2 configuration file (`dhis.conf`). LDAP users, or entries, are identified by distinguished names (DN from now on). An example configuration looks like this:

```
# LDAP server URL
ldap.url = ldaps://domain.org:636

# LDAP manager entry distinguished name
ldap.manager.dn = cn=johndoe,dc=domain,dc=org

# LDAP manager entry password
ldap.manager.password = xxxx

# LDAP base search
ldap.search.base = dc=domain,dc=org

# LDAP search filter
ldap.search.filter = (cn={0})
```

The LDAP configuration properties are explained below:

- `ldap.url`: The URL of the LDAP server to authenticate against. Using SSL/encryption is strongly recommended in order to make authentication secure. An example URL is `ldaps://domain.org:636`, where ldaps refers to the protocol, `domain.org` refers to the domain name or IP address, and 636 refers to the port (636 is the default for LDAPS).
- `ldap.manager.dn`: An LDAP manager user is required for binding to the LDAP server during the user authentication process. This property refers to the DN of that entry. I.e. this is not the user which will be authenticated when logging into DHIS2, but rather the user which binds to the LDAP server in order to do the authentication.
- `ldap.manager.password`: The password for the LDAP manager user.
- `ldap.search.base`: The search base, or the distinguished name of the search base object, which defines the location in the directory from which the LDAP search begins.
- `ldap.search.filter`: The filter for matching DNs of entries in the LDAP directory. The {0} variable will be substituted by the DHIS2 username, or alternatively, by the LDAP identifier defined for the user with the supplied username.
DHIS2 will use the supplied username / password and try to authenticate against an LDAP server entry, then look up user roles / authorities from a corresponding DHIS2 user. This implies that a user must have a matching entry in the LDAP directory as well as a DHIS2 user in order to log in.

During authentication, DHIS2 will try to bind to the LDAP server using the configured LDAP server URL and the manager DN and password. Once the binding is done, it will search for an entry in the directory using the configured LDAP search base and search filter.

The `{0}` variable in the configured filter will be substituted before applying the filter. By default, it will be substituted by the supplied username. You can also set a custom LDAP identifier on the relevant DHIS2 user account. This can be done through the DHIS2 user module user interface in the add or edit screen by setting the “LDAP identifier” property. When set, the LDAP identifier will be substituted for the `{0}` variable in the filter. This feature is useful when the LDAP common name is not suitable or cannot for some reason be used as a DHIS2 username.

## 2.8 Encryption configuration

DHIS2 allows for encryption of data. This however requires some extra setup. To provide security to the encryption algorithm you will have to set a password in the `dhis.conf` configuration file through the `encryption.password` property:

```
encryption.password = xxxx
```

The `encryption.password` property is the password used when encrypting and decrypting data in the database. Note that the password must not be changed once it has been set and data has been encrypted, as the data can then no longer be decrypted. The password must be at least 24 characters long. A mix of numbers and lower- and uppercase letters is recommended. The encryption password must be kept secret.

> **Important**
> A word of caution: It is not possible to recover encrypted data if the encryption password is lost or changed.
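As a concrete illustration of the filter substitution described above (the username here is hypothetical), with the filter from the example configuration:

```
# With this filter configured in dhis.conf:
ldap.search.filter = (cn={0})

# A login with the DHIS2 username "johndoe" (and no custom LDAP identifier
# set on the account) produces the effective LDAP search filter:
(cn=johndoe)
```

If the same account instead had an LDAP identifier set, that identifier would replace `{0}` in place of the username.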
If the password is lost, so is the encrypted data. Conversely, the encryption provides no security if the password is compromised. Hence, great consideration should be given to storing the password in a safe place.

## 2.9 Read replica database configuration

DHIS2 allows for utilizing read-only replicas of the master database (the main DHIS2 database). The purpose of read replicas is to enhance the performance of database read queries and scale out the capacity beyond the constraints of a single database. Read-heavy operations such as analytics and event queries will benefit from this.

The configuration requires that you have created one or more replicated instances of the master DHIS2 database. PostgreSQL achieves this through a concept referred to as streaming replication. Configuring read replicas for PostgreSQL is not covered in this guide.

Read replicas can be defined in the `dhis.conf` configuration file. You can specify up to 5 read replicas per DHIS2 instance. Each read replica is denoted with a number between 1 and 5. The JDBC connection URL must be defined per replica. The username and password can be specified; if not, the username and password for the master database will be used instead.

The configuration for read replicas in `dhis.conf` looks like the below. Each replica is specified with the configuration key `readN` prefix, where N refers to the replica number.

```
# Read replica 1 configuration
```

Note that you must restart your servlet container for the changes to take effect. DHIS 2 will automatically distribute the load across the read replicas. The ordering of replicas has no significance.

## 2.10 Web server cluster configuration

This section describes how to set up the DHIS 2 application to run in a cluster.

### 2.10.1 Clustering overview

Clustering is a common technique for improving system scalability and availability. Clustering refers to setting up multiple web servers, such as Tomcat instances, and having them serve a single application.
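The truncated snippet above can be fleshed out along these lines. The key names follow the `readN` prefix convention described in the text; the connection URL, username and password values are placeholders to adapt to your replica:

```
# Read replica 1 configuration

# Database connection URL, username and password
read1.connection.url = jdbc:postgresql://127.0.0.11/dbread1
read1.connection.username = dhis
read1.connection.password = xxxx
```

Additional replicas would use the `read2`, `read3` (and so on, up to `read5`) prefixes with the same property names.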
Clustering allows for scaling out an application in the sense that new servers can be added to improve performance. It also allows for high availability, as the system can tolerate instances going down without making the system inaccessible to users. There are a few aspects to configure in order to run DHIS 2 in a cluster.

- Each DHIS 2 instance must specify the other DHIS 2 instance members of the cluster in dhis.conf.
- A Redis data store must be installed, and connection information must be provided for each DHIS 2 application instance in dhis.conf.
- DHIS 2 instances and servers must share the same files folder used for apps and file uploads, either through the AWS S3 cloud filestorage option or a shared network drive.
- A load balancer such as nginx must be configured to distribute Web requests across the cluster instances.

### 2.10.2 DHIS 2 instance cluster configuration

When setting up multiple Tomcat instances there is a need for making the instances aware of each other. This awareness will enable DHIS 2 to keep the local data (Hibernate) caches in sync and in a consistent state. When an update is done on one instance, the caches on the other instances must be notified so that they can be invalidated and avoid becoming stale.

A DHIS 2 cluster setup is based on manual configuration of each instance. For each DHIS 2 instance one must specify the public hostname as well as the hostnames of the other instances participating in the cluster. The hostname of the server is specified using the cluster.hostname configuration property. Additional servers which participate in the cluster are specified using the cluster.members configuration property. The property expects a list of comma separated values where each value is of the format host:port. The hostname must be visible to the participating servers on the network for the clustering to work.
You might have to allow incoming and outgoing connections on the configured port numbers in the firewall.

The port number of the server is specified using the cluster.cache.port configuration property. The remote object port used for registry receive calls is specified using cluster.cache.remote.object.port. Specifying the port numbers is typically only useful when you have multiple cluster instances on the same server, or if you need to explicitly specify the ports to match a firewall configuration. When running cluster instances on separate servers it is often appropriate to use the default port number and omit the port configuration properties. If omitted, 4001 will be assigned as the listener port and a random free port will be assigned as the remote object port.

An example setup for a cluster of two web servers is described below. For server A available at hostname 193.157.199.131 the following can be specified in dhis.conf:

```plaintext
# Cluster configuration for server A

# Hostname for this web server
cluster.hostname = 193.157.199.131

# Ports for cache listener, can be omitted
cluster.cache.port = 4001
cluster.cache.remote.object.port = 5001

# List of host:port participating in the cluster
cluster.members = 193.157.199.132:4001
```

For server B available at hostname 193.157.199.132 the following can be specified in dhis.conf (notice how the port configuration is omitted):

```plaintext
# Cluster configuration for server B

# Hostname for this web server
cluster.hostname = 193.157.199.132

# List of servers participating in the cluster
cluster.members = 193.157.199.131:4001
```

You must restart each Tomcat instance to make the changes take effect. The two instances have now been made aware of each other and DHIS 2 will ensure that their caches are kept in sync.

### 2.10.3 Redis shared data store cluster configuration

In a cluster setup, a Redis instance is required and will handle shared user sessions, application cache and cluster node leadership.
For optimum performance, Redis keyspace events for generic commands and expired events need to be enabled in the Redis server. If you are using a cloud platform-managed Redis server (like AWS ElastiCache for Redis or Azure Cache for Redis) you will have to enable keyspace event notifications using the respective cloud console interfaces. If you are setting up a standalone Redis server, you can enable keyspace event notifications in the redis.conf file by adding or uncommenting the line `notify-keyspace-events Egx` (keyevent notifications for generic commands and expired keys).

DHIS2 will connect to Redis if the `redis.enabled` configuration property in `dhis.conf` is set to `true`, along with the following properties:

- **redis.host**: Specifies where the Redis server is running, e.g. `localhost`. Mandatory.
- **redis.port**: Specifies the port on which the Redis server is listening. Defaults to `6379`. Optional.
- **redis.password**: Specifies the authentication password. If a password is not required it can be left blank.
- **redis.use.ssl**: Specifies whether the Redis server has SSL enabled. Defaults to `false`. Optional.

When Redis is enabled, DHIS2 will automatically assign one of the running instances as the leader of the cluster. The leader instance will be used to execute jobs or scheduled tasks that should be run exclusively by one instance. Optionally you can configure the `leader.time.to.live.minutes` property in `dhis.conf` to set how frequently the leader election needs to occur. It also gives an indication of how long it would take for another instance to take over as the leader after the previous leader has become unavailable. The default value is 2 minutes. Note that assigning a leader in the cluster is only done if Redis is enabled. An example snippet of the `dhis.conf` configuration file with Redis enabled and leader election time configured is shown below.
```
# Redis configuration
redis.enabled = true
redis.host = 193.158.100.111
redis.port = 6379
redis.password = <your password>
redis.use.ssl = false

# Optional, defaults to 2 minutes
leader.time.to.live.minutes = 4
```

### 2.10.4 Files folder configuration

DHIS 2 will store several types of files outside the application itself, such as apps, files saved in data entry and user avatars. When deployed in a cluster, the location of these files must be shared across all instances. On the local filesystem, the location is:

```
{DHIS2_HOME}/files
```

Here, `DHIS2_HOME` refers to the location of the DHIS 2 configuration file as specified by the DHIS2_HOME environment variable, and `files` is the file folder immediately below. There are two ways to achieve a shared location:

- Use the AWS S3 cloud file storage option. Files will be stored in an S3 bucket which is automatically shared by all DHIS 2 instances in the cluster. See the File store configuration section for guidance.
- Set up a folder which is shared among all DHIS 2 instances and servers in the cluster. On Linux this can be achieved with NFS (Network File System), a distributed file system protocol. Note that only the files subfolder under DHIS2_HOME should be shared, not the parent folder.

### 2.10.5 Load balancer configuration

With a cluster of Tomcat instances set up, a common approach for routing incoming web requests to the backend instances participating in the cluster is using a load balancer. A load balancer will make sure that load is distributed evenly across the cluster instances. It will also detect whether an instance becomes unavailable, and if so, stop routing requests to that instance and instead use other available instances.

Load balancing can be achieved in multiple ways. A simple approach is using nginx, in which case you will define an upstream element which enumerates the location of the backend instances and later use that element in the proxy location block.
```http
http {

  # Upstream element with sticky sessions
  upstream dhis_cluster {
    ip_hash;
    server 193.157.199.131:8080;
    server 193.157.199.132:8080;
  }

  # Proxy pass to backend servers in cluster
  server {
    listen 80;

    location / {
      proxy_pass http://dhis_cluster/;
    }
  }
}
```

DHIS 2 keeps server-side state for user sessions to a limited degree. Using “sticky sessions” is a simple approach to avoid replicating the server session state by routing requests from the same client to the same server. The ip_hash directive in the upstream element ensures this.

Note that several instructions have been omitted for brevity in the above example. Consult the reverse proxy section for a detailed guide.

### 2.11 Analytics cache configuration

DHIS 2 supports a server-side cache for analytics API responses, used by all of the analytics web apps. This cache sits within the DHIS 2 application and hence is protected by the DHIS 2 authentication and security layer. You can configure the expiration of cached entries in seconds by defining the analytics.cache.expiration property in dhis.conf. For example, `analytics.cache.expiration = 3600` enables the cache and sets the expiration to one hour.

### 2.12 Monitoring

DHIS 2 can export Prometheus-compatible metrics for monitoring DHIS2 instances. The DHIS2 monitoring infrastructure is designed to expose metrics related to the application runtime and other application-related information. Infrastructure-related metrics (such as host metrics, Tomcat or Postgres) are not directly exposed by the application monitoring engine and have to be collected separately. The metrics currently exposed by the application are:

- DHIS 2 API (response time, number of calls, etc.)
- JVM (heap size, garbage collection, etc.)
- Hibernate (queries, cache, etc.)
- C3P0 database pool
- Application uptime
- CPU

Monitoring can be enabled in dhis.conf with the following properties (the default is off for all properties):

```properties
monitoring.api.enabled = on
monitoring.jvm.enabled = on
monitoring.dbpool.enabled = on
monitoring.hibernate.enabled = off
monitoring.uptime.enabled = on
monitoring.cpu.enabled = on
```

The recommended approach for collecting and visualizing these metrics is through Prometheus and Grafana. For more information, see the monitoring infrastructure page and the Prometheus and Grafana install chapter.

### 2.13 Reverse proxy configuration

A reverse proxy is a proxy server that acts on behalf of a server. Using a reverse proxy in combination with a servlet container is optional but has many advantages:

- Requests can be mapped and passed on to multiple servlet containers. This improves flexibility and makes it easier to run multiple instances of DHIS2 on the same server. It also makes it possible to change the internal server setup without affecting clients.
- The DHIS2 application can be run as a non-root user on a port different from 80, which reduces the consequences of session hijacking.
- The reverse proxy can act as a single SSL server and be configured to inspect requests for malicious content, log requests and responses and provide non-sensitive error messages, which will improve security.

### 2.13.1 Basic nginx setup

We recommend using nginx as a reverse proxy due to its low memory footprint and ease of use. To install it, invoke the following:

```bash
sudo apt-get install nginx
```

nginx can now be started, reloaded and stopped with the following commands:

```
sudo /etc/init.d/nginx start
sudo /etc/init.d/nginx reload
sudo /etc/init.d/nginx stop
```

Now that we have installed nginx, we will continue to configure regular proxying of requests to our Tomcat instance, which we assume runs at `http://localhost:8080`.
To configure nginx you can open the configuration file by invoking:

```
sudo nano /etc/nginx/nginx.conf
```

nginx configuration is built around a hierarchy of blocks representing http, server and location, where each block inherits settings from parent blocks. The following snippet will configure nginx to proxy pass (redirect) requests from port 80 (which is the port nginx will listen on by default) to our Tomcat instance. Include the following configuration in nginx.conf:

```
http {
  gzip on; # Enables compression, incl Web API content-types
  gzip_types
    "application/json;charset=utf-8" application/json
    "application/javascript;charset=utf-8" application/javascript text/javascript
    "application/xml;charset=utf-8" application/xml text/xml
    "text/css;charset=utf-8" text/css
    "text/plain;charset=utf-8" text/plain;

  server {
    listen               80;
    client_max_body_size 10M;

    # Proxy pass to servlet container
    location / {
      proxy_pass                http://localhost:8080/;
      proxy_redirect            off;
      proxy_set_header          Host               $host;
      proxy_set_header          X-Real-IP          $remote_addr;
      proxy_set_header          X-Forwarded-For    $proxy_add_x_forwarded_for;
      proxy_set_header          X-Forwarded-Proto  http;
      proxy_buffer_size         128k;
      proxy_buffers             8 128k;
      proxy_busy_buffers_size   256k;
      proxy_cookie_path         ~*^/(.*) "/$1; SameSite=Lax";
    }
  }
}
```

You can now access your DHIS2 instance at `http://localhost`. Since the reverse proxy has been set up, we can improve security by making Tomcat only listen for local connections. In `/conf/server.xml` you can add an `address` attribute with the value `localhost` to the Connector element for HTTP 1.1 like this:

```
<Connector address="localhost" protocol="HTTP/1.1" />
```

### 2.13.2 Enabling SSL with nginx

In order to improve security it is recommended to configure the server running DHIS2 to communicate with clients over an encrypted connection and to identify itself to clients using a trusted certificate. This can be achieved through SSL, a cryptographic communication protocol running on top of TCP/IP.
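If you want to try out the SSL setup before obtaining a certificate from a provider, a self-signed certificate can be generated with openssl. This is a sketch for testing only: browsers will warn about the untrusted certificate, and the hostname `dhis.example.org` is a placeholder you should replace with your own domain.

```shell
# Generate a self-signed certificate and key, valid for one year (testing only).
# The CN value is a placeholder; use your server's fully qualified domain name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt \
  -subj "/CN=dhis.example.org"
```

The resulting server.crt and server.key files can be referenced from the nginx configuration in the same way as a certificate issued by a provider.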
First, install the required openssl library:

```bash
sudo apt-get install openssl
```

To configure nginx to use SSL you will need a proper SSL certificate from an SSL provider. The cost of a certificate varies a lot depending on encryption strength. An affordable certificate from Rapid SSL Online should serve most purposes. To generate the CSR (certificate signing request) you can invoke the command below. When you are prompted for the Common Name, enter the fully qualified domain name for the site you are securing.

```bash
openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
```

When you have received your certificate files (.pem or .crt) you will need to place them together with the generated server.key file in a location which is reachable by nginx. A good location for this can be the same directory as where your nginx.conf file is located.

Below is an nginx server block where the certificate files are named server.crt and server.key. Since SSL connections usually occur on port 443 (HTTPS), we pass requests on that port (443) on to the DHIS2 instance running on `http://localhost:8080`. The first server block will rewrite all requests connecting to port 80 and force the use of HTTPS/SSL. This is also necessary because DHIS2 is using a lot of redirects internally which must be passed on to use HTTPS. Remember to replace `<server-url>` with the URL of your server. These blocks should replace the one from the previous section.

```
http {
  gzip on; # Enables compression, incl Web API content-types
  gzip_types
    "application/json;charset=utf-8" application/json
    "application/javascript;charset=utf-8" application/javascript text/javascript
    "application/xml;charset=utf-8" application/xml text/xml
    "text/css;charset=utf-8" text/css
    "text/plain;charset=utf-8" text/plain;

  # HTTP server - rewrite to force use of SSL
  server {
    listen 80;
    rewrite ^ https://<server-url>$request_uri?
    permanent;
  }

  # HTTPS server
  server {
    listen               443 ssl;
    client_max_body_size 10M;

    ssl                  on;
    ssl_certificate      server.crt;
    ssl_certificate_key  server.key;

    ssl_session_cache    shared:SSL:20m;
    ssl_session_timeout  10m;

    ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers          RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Proxy pass to servlet container
    location / {
      proxy_pass                http://localhost:8080/;
      proxy_redirect            off;
      proxy_set_header          Host               $host;
      proxy_set_header          X-Real-IP          $remote_addr;
      proxy_set_header          X-Forwarded-For    $proxy_add_x_forwarded_for;
      proxy_set_header          X-Forwarded-Proto  https;
      proxy_buffer_size         128k;
      proxy_buffers             8 128k;
      proxy_busy_buffers_size   256k;
      proxy_cookie_path         ~*^/(.*) "/$1; SameSite=Lax";
    }
  }
}
```

Note the last HTTPS header value which is required to inform the servlet container that the request is coming over HTTPS. In order for Tomcat to properly produce Location URL headers using HTTPS you also need to add two other parameters to the Connector in the Tomcat server.xml file:

```
<Connector scheme="https" proxyPort="443" />
```

### 2.13.3 Enabling caching with nginx

Requests for reports, charts, maps and other analysis-related resources will often take some time to respond and might utilize a lot of server resources. In order to improve response times, reduce the load on the server and hide potential server downtime, we can introduce a cache proxy in our server setup. The cached content will be stored in the directory /var/cache/nginx, and up to 250 MB of storage will be allocated. nginx will create this directory automatically.

```
http {
  ..
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=dhis:250m inactive=1d;
}

server {
  ..
  # Proxy pass to servlet container and potentially cache response
  location / {
    proxy_pass                http://localhost:8080/;
    proxy_redirect            off;
    proxy_set_header          Host               $host;
    proxy_set_header          X-Real-IP          $remote_addr;
    proxy_set_header          X-Forwarded-For    $proxy_add_x_forwarded_for;
    proxy_set_header          X-Forwarded-Proto  https;
    proxy_buffer_size         128k;
    proxy_buffers             8 128k;
    # Serve responses from the 'dhis' cache zone defined above
    proxy_cache               dhis;
  }
}
```

### 2.13.4 Rate limiting with nginx

Certain web API calls in DHIS 2, like the analytics APIs, are compute-intensive. As a result it is favorable to rate limit these APIs in order to allow all users of the system to utilize a fair share of the server resources. Rate limiting can be achieved with nginx. There are numerous approaches to achieving rate limiting; this section documents the nginx-based approach.

The below nginx configuration will rate limit the analytics web API, and has the following elements at the http and location block level (the configuration is shortened for brevity):

```nginx
http {
  ..
  limit_req_zone $binary_remote_addr zone=limit_analytics:10m rate=5r/s;
}

server {
  ..
  location ~ ^/api/(\d+/)?analytics(.*)$ {
    limit_req  zone=limit_analytics burst=20;
    proxy_pass http://localhost:8080/api/$1analytics$2$is_args$args;
  }
}
```

The various elements of the configuration can be described as:

- `limit_req_zone $binary_remote_addr`: Rate limiting is done per request IP.
- `zone=limit_analytics:10m`: A rate limit zone for the analytics API which can hold up to 10 MB of request IP addresses.
- `rate=5r/s`: Each IP is granted 5 requests per second.
- `location ~ ^/api/(\d+/)?analytics(.*)$`: Requests for the (optionally versioned) analytics API endpoint are rate limited.
- `burst=20`: Bursts of up to 20 requests will be queued and serviced at a later point; additional requests will lead to a 503.

For a full explanation please consult the nginx documentation.

### 2.13.5 Making resources available with nginx

In some scenarios it is desirable to make certain resources publicly available on the Web without requiring authentication.
One example is when you want to make data analysis related resources in the web API available in a Web portal. The following example will allow access to charts, maps, reports, report tables and document resources through basic authentication by injecting an Authorization HTTP header into the request. It will remove the Cookie header from the request and the Set-Cookie header from the response in order to avoid changing the currently logged in user. It is recommended to create a user for this purpose given only the minimum authorities required. The Authorization value can be constructed by Base64-encoding the username appended with a colon and the password, and prefixing it with “Basic ”; more precisely, “Basic base64_encode(username:password)”. The configuration will check the HTTP method used for requests and return 405 Method Not Allowed if anything but GET is detected.

It can be favorable to set up a separate domain for such public users when using this approach. This is because we don't want to change the credentials for already logged in users when they access the public resources. For instance, when your server is deployed at somedomain.com, you can set up a dedicated subdomain at api.somedomain.com, and point URLs from your portal to this subdomain.
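The Authorization value described above can be constructed on the command line, for example like this (using the admin/district demo credentials purely as an illustration):

```shell
# Base64-encode "username:password" and prefix it with "Basic "
printf 'Basic %s' "$(printf 'admin:district' | base64)"
# → Basic YWRtaW46ZGlzdHJpY3Q=
```

The resulting string is the value to set in the proxied Authorization header.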
```plaintext
http {
  server {
    listen 80;
    server_name api.somedomain.com;

    location ~ ^/(api/(charts|chartValues|reports|reportTables|documents|maps|organisationUnits)|dhis-web-commons/javascripts|images|dhis-web-commons-ajax-json|dhis-web-mapping|dhis-web-visualizer) {
      if ($request_method != GET) {
        return 405;
      }

      proxy_pass         http://localhost:8080;
      proxy_redirect     off;
      proxy_set_header   Host               $host;
      proxy_set_header   X-Real-IP          $remote_addr;
      proxy_set_header   X-Forwarded-For    $proxy_add_x_forwarded_for;
      proxy_set_header   X-Forwarded-Proto  http;
      proxy_set_header   Authorization      "Basic YWRtaW46ZGlzdHJpY3Q=";
      proxy_set_header   Cookie             "";
      proxy_hide_header  Set-Cookie;
    }
  }
}
```

### 2.14 DHIS2 configuration reference

The following describes the full set of configuration options for the dhis.conf configuration file. The configuration file should be placed in a directory which is pointed to by a DHIS2_HOME environment variable.

#### Note

You should not attempt to use this configuration file directly; rather, use it as a reference for the available configuration options. Many of the properties are optional.
```properties
# Database connection for PostgreSQL [Mandatory]

# Hibernate SQL dialect
connection.dialect = org.hibernate.dialect.PostgreSQLDialect

# JDBC driver class
connection.driver_class = org.postgresql.Driver

# Database connection URL
connection.url = jdbc:postgresql:dhis2

# Database username
connection.username = dhis

# Database password (sensitive)
connection.password = xxxx

# Database schema behavior
connection.schema = update

# Max size of connection pool
connection.pool.max_size = 40

# Base URL to the DHIS 2 instance
server.base.url = https://play.dhis2.org/dev

# Enable secure settings if deployed on HTTPS
server.https = off

# Put the system in read-only mode
system.read_only_mode = off

# Session timeout in seconds
system.session.timeout = 3600

# SQL view table protection
system.sql_view_table_protection = on

# Encryption password (sensitive)
encryption.password = xxxx

# File store [Optional]

# File store provider, currently 'filesystem' and 'aws-s3' are supported
filestore.provider = filesystem

# Directory / bucket name, folder below DHIS2_HOME on file system, 'bucket' on AWS S3
filestore.container = files

# Datacenter location (not required)
filestore.location = eu-west-1

# Public identity / username
filestore.identity = dhis2-id

# Secret key / password (sensitive)
filestore.secret = xxxx

# LDAP [Optional]

# LDAP server URL
ldap.url = ldaps://300.20.300.20:636

# LDAP manager user distinguished name
ldap.manager.dn = cn=JohnDoe,ou=Country,ou=Admin,dc=hisp,dc=org

# LDAP manager user password (sensitive)
ldap.manager.password = xxxx

# LDAP entry distinguished name search base
ldap.search.base = dc=hisp,dc=org

# LDAP entry distinguished name filter
ldap.search.filter = (cn={0})

# Node [Optional]

# Node identifier, optional, useful in clusters
node.id = 'node-1'

# Analytics [Optional]

# Analytics server-side cache expiration in seconds
analytics.cache.expiration = 3600

# System monitoring [Optional]

# System monitoring URL
system.monitoring.url =

# System monitoring username
system.monitoring.username =
```

### 2.15 Application logging

This section covers application logging in DHIS 2.

### 2.15.1 Log files

The DHIS2 application log output is directed to multiple files and locations.
First, log output is sent to standard output. The Tomcat servlet container usually writes standard output to a file under “logs”:

```
<tomcat-dir>/logs/catalina.out
```

Second, log output is written to a “logs” directory under the DHIS2 home directory as defined by the DHIS2_HOME environment variable. There is a main log file for all output, and separate log files for various background processes. The main file includes the background process logs as well. The log files are capped at 50 MB and log content is continuously appended.

```
<DHIS2_HOME>/logs/dhis.log
<DHIS2_HOME>/logs/dhis-analytics-table.log
<DHIS2_HOME>/logs/dhis-data-exchange.log
<DHIS2_HOME>/logs/dhis-data-sync.log
```

### 2.15.2 Log configuration

In order to override the default log configuration you can specify a Java system property with the name `log4j.configuration` and a value pointing to the Log4j configuration file on the classpath. If you want to point to a file on the file system (i.e. outside Tomcat) you can use the `file` prefix, e.g. like this:

```
-Dlog4j.configuration=file:/home/dhis/config/log4j.properties
```

Java system properties can be set e.g. through the `JAVA_OPTS` environment variable or in the Tomcat startup script.

A second approach to overriding the log configuration is to specify logging properties in the `dhis.conf` configuration file. The supported properties are:

```
# Max size for log files, default is '100MB'
logging.file.max_size = 250MB

# Max number of rolling log archive files, default is 0
logging.file.max_archives = 2
```

DHIS2 will eventually phase out logging to standard out / catalina.out, and as a result it is recommended to rely on the logs under DHIS2_HOME.

### 2.16 Working with the PostgreSQL database

Common operations when managing a DHIS2 instance are dumping and restoring databases.
To make a dump (copy) of your database, assuming the setup from the installation section, you can invoke the following:

```
pg_dump dhis2 -U dhis -f dhis2.sql
```

The first argument (dhis2) refers to the name of the database. The second argument (dhis) refers to the database user. The last argument (dhis2.sql) is the file name of the copy. If you want to compress the file copy immediately you can do:

```
pg_dump dhis2 -U dhis | gzip > dhis2.sql.gz
```

To restore this copy on another system, you first need to create an empty database as described in the installation section. You also need to gunzip the copy if you created a compressed version. You can invoke:

```
psql -d dhis2 -U dhis -f dhis2.sql
```

### 3 Monitoring

### 3.1 Introduction

DHIS2 can export Prometheus-compatible metrics for monitoring DHIS2 nodes. This section describes the steps required to install Prometheus and Grafana using a standard installation procedure (apt-get) and Docker, and to configure Grafana to show DHIS2 metrics. For a list of the metrics exposed by a DHIS2 instance, please refer to the monitoring guide on GitHub.

### 3.2 Setup

The next sections describe how to set up Prometheus and Grafana and how to set up Prometheus to pull data from one or more DHIS2 instances.

### 3.2.1 Installing Prometheus + Grafana on Ubuntu and Debian

- Download Prometheus from the official download page.
- Make sure to filter for your operating system and your CPU architecture (Linux and amd64).
- Make sure to select the latest stable version, and not the “rc” one, as it is not considered stable enough for now.
- Download the archive, either by clicking on the link or using wget:

```
wget https://github.com/prometheus/prometheus/releases/download/v2.15.2/prometheus-2.15.2.linux-amd64.tar.gz
```

- Extract the archive:

```
tar xvzf prometheus-2.15.2.linux-amd64.tar.gz
```

The archive contains many important files, but here are the main ones you need to know:

- prometheus.yml: the configuration file for Prometheus.
This is the file that you are going to modify in order to tweak your Prometheus server, for example to change the scraping interval or to configure custom alerts.
- prometheus: the binary for your Prometheus server. This is the command that you execute to launch a Prometheus instance on your Linux box.
- promtool: a command that you can run to verify your Prometheus configuration.

### 3.2.2 Configuring Prometheus as a service

- Create a Prometheus user with a Prometheus group:

```
useradd -rs /bin/false prometheus
```

- Move the Prometheus binaries (`prometheus` and `promtool`) to the local bin directory, for example with `cp prometheus promtool /usr/local/bin`.

### 3.2.3 Create a Prometheus service

To create a Prometheus systemd service, head over to the /lib/systemd/system folder and create a new systemd file named prometheus.service.

```sh
cd /lib/systemd/system
touch prometheus.service
```

- Edit the newly created file, and paste the following content inside:

```ini
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=prometheus
Group=prometheus
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path="/data/prometheus" \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-admin-api
Restart=always

[Install]
WantedBy=multi-user.target
```

- Save the file and enable the Prometheus service at startup.
- Create a folder in the /etc folder for Prometheus, and move the console files, console libraries and the Prometheus configuration file to this newly created folder:

```sh
mkdir /etc/prometheus
cp -R consoles/ console_libraries/ prometheus.yml /etc/prometheus
```

- Create a data folder at the root directory, with a prometheus folder inside.
```sh
mkdir -p /data/prometheus
chown -R prometheus:prometheus /data/prometheus /etc/prometheus/*
```

### 3.2.4 Set-up Nginx reverse proxy

Prometheus does not natively support authentication or TLS encryption. If Prometheus has to be exposed outside the boundaries of the local network, it is important to enable authentication and TLS encryption. The following steps show how to use Nginx as a reverse proxy.

- Install Nginx, if not already installed:

```bash
apt update
apt-get install nginx
```

By default, Nginx will start listening for HTTP requests on the default http port, which is 80. If there is already an Nginx instance running on the machine and you are unsure which port it is listening on, run the following command:

```bash
lsof | grep LISTEN | grep nginx
```

The last column shows the port used by Nginx (http -> 80). By default, the Nginx configuration is located in `/etc/nginx/nginx.conf`. Make sure that `nginx.conf` contains the `Virtual Host Configs` section with the include directives enabled:

```
##
# Virtual Host Configs
##

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
```

- Create a new file in `/etc/nginx/conf.d` called `prometheus.conf`:

```bash
touch /etc/nginx/conf.d/prometheus.conf
```

- Edit the newly created file, and paste the following content inside:

```
server {
    listen 1234;

    location / {
      proxy_pass http://localhost:9090/;
    }
}
```

- Restart Nginx and browse to http://localhost:1234:

```bash
systemctl restart nginx

# in case of start-up errors
journalctl -f -u nginx.service
```

- Configure Prometheus for reverse proxying by editing /lib/systemd/system/prometheus.service and adding the following argument to the list of arguments passed to the Prometheus executable:

```
--web.external-url=https://localhost:1234
```

- Restart the service:

```bash
systemctl daemon-reload
systemctl restart prometheus

# in case of errors
journalctl -f -u prometheus.service
```

### 3.2.5 Enable reverse proxy authentication

This section shows how to configure basic authentication via the reverse proxy.
If you need a different authentication mechanism (SSO, etc.) please check the relevant documentation.

- Make sure that htpasswd is installed on the system:

```bash
apt-get install apache2-utils
```

- Create an authentication file:

```bash
cd /etc/prometheus
htpasswd -c .credentials admin
```

Choose a strong password, and make sure that the password file was correctly created.

- Edit the previously created Nginx configuration file (/etc/nginx/conf.d/prometheus.conf), and add the authentication information:

```
server {
    listen 1234;

    location / {
      auth_basic           "Prometheus";
      auth_basic_user_file /etc/prometheus/.credentials;
      proxy_pass           http://localhost:9090/;
    }
}
```

- Restart Nginx:

```bash
systemctl restart nginx

# in case of errors
journalctl -f -u nginx.service
```

- http://localhost:1234 should now prompt for username and password.

### 3.2.6 Installing Grafana on Ubuntu and Debian

- Add a GPG key and install the OSS Grafana package from the APT repository:

```bash
apt-get install -y apt-transport-https
wget -q -O - "https://packages.grafana.com/gpg.key" | sudo apt-key add -
add-apt-repository "deb https://packages.grafana.com/oss/deb stable main"
apt-get update
apt-get install grafana
```

- If the system is using systemd, a new grafana-server service is automatically created. Check the systemd file to gain some insight into the Grafana installation:

```bash
cat /usr/lib/systemd/system/grafana-server.service
```

This file is quite important because it offers information about the newly installed Grafana instance. The file shows:

- The Grafana server binary is located at /usr/sbin/grafana-server.
- The file that defines all the environment variables is located at /etc/default/grafana-server.
- The configuration file is given via the CONF_FILE environment variable.
- The PID file location is determined by the PID_FILE_DIR environment variable.
- Logging, data, plugins and provisioning paths are given by environment variables.

- Start the server:

```bash
systemctl start grafana-server
```

The default login for Grafana is admin and the default password is also admin.
You will be prompted to change the password on first access.

- Configure Prometheus as a Grafana data source: access the data sources panel by clicking Configuration > Data sources in the left menu.
- Click on Add a data source.
- Select a Prometheus data source in the next window.
- Configure the data source according to the Prometheus setup (use authentication, TLS, etc.).

### 3.2.7 Installing Prometheus + Grafana using Docker

This section describes how to start up a Prometheus stack containing Prometheus and Grafana. The configuration is based on this project: https://github.com/vegasbrianc/prometheus

- Clone the GitHub project: https://github.com/vegasbrianc/prometheus
- Start the Prometheus stack using:

```bash
docker stack deploy -c docker-stack.yml prom
```

The above command may result in the following error: **This node is not a swarm manager. Use “docker swarm init” or “docker swarm join” to connect this node to swarm and try again.** If that happens, you need to start Swarm. You can use the following command line:

```bash
docker swarm init --advertise-addr <YOUR_IP>
```

Once this command runs successfully, you should be able to run the previous command without problems.

The stack also contains a Node exporter for Docker monitoring. If you are not interested in Docker monitoring, you can comment out the relevant sections in the `docker-stack.yml` file:

- node-exporter
- cadvisor

To stop the Prometheus stack:

```bash
docker stack rm prom
```

The Prometheus configuration file (prometheus.yml) is located in the prometheus folder.

### 3.2.8 Configure Prometheus to pull metrics from one or more DHIS2 instances

Prior to using Prometheus, it needs basic configuration. Thus, we need to create a configuration file named prometheus.yml.

#### Note

The configuration file of Prometheus is written in YAML, which strictly forbids tabs. If your file is incorrectly formatted, Prometheus will not start. Be careful when you edit it.
Prometheus' configuration file is divided into three parts: global, rule_files, and scrape_configs.

In the global part we find the general configuration of Prometheus: `scrape_interval` defines how often Prometheus scrapes targets, while `evaluation_interval` controls how often the software evaluates rules. Rules are used to create new time series and to generate alerts.

The `rule_files` block contains the location of any rules we want the Prometheus server to load.

The last block, `scrape_configs`, specifies which resources Prometheus monitors. A simple DHIS2 Prometheus monitoring file looks like this example:

```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'dhis2'
    metrics_path: '/dhis/api/metrics'
    basic_auth:
      username: admin
      password: district
    static_configs:
      - targets: ['localhost:80']
```

The global `scrape_interval` is set to 15 seconds, which is enough for most use cases.

In the scrape_configs part we have defined the DHIS2 exporter. The basic_auth block contains the credentials required to access the metrics API: consider creating an ad-hoc user only for accessing the metrics endpoint.

Prometheus may or may not run on the same server as DHIS2: in the above configuration, it is assumed that Prometheus monitors only one DHIS2 instance, running on the same server as Prometheus, so we use localhost.

### 3.2.9 Configure the DHIS2 exporter

The monitoring subsystem is disabled by default in DHIS2. Each metrics cluster has to be explicitly enabled in order for the metrics to be exported. To configure DHIS2 to export one or more metrics, check this document.

# 4 Audit

## 4.1 Introduction

DHIS2 supports a new audit service based on Apache ActiveMQ Artemis. Artemis is used as an asynchronous messaging system by DHIS2. After an entity is saved to the database, an audit message is sent to the Artemis message consumer service.
The message is then processed in a different thread. Audit logs can be retrieved from the DHIS2 database. Currently there is no UI or API endpoint available for retrieving audit entries.

## 4.2 Single Audit table

All audit entries are saved into one single table named `audit`:

<table> <thead> <tr> <th>Column</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>auditid</td> <td>integer</td> </tr> <tr> <td>audittype</td> <td>text</td> </tr> <tr> <td>auditscope</td> <td>text</td> </tr> <tr> <td>klass</td> <td>text</td> </tr> <tr> <td>attributes</td> <td>jsonb</td> </tr> <tr> <td>data</td> <td>bytea</td> </tr> <tr> <td>createdat</td> <td>timestamp without time zone</td> </tr> <tr> <td>createdby</td> <td>text</td> </tr> <tr> <td>uid</td> <td>text</td> </tr> <tr> <td>code</td> <td>text</td> </tr> </tbody> </table>

The new audit service makes use of two new concepts: Audit Scopes and Audit Types.

## 4.3 Audit Scope

An Audit Scope is a logical area of the application which can be audited. Currently there are three Audit Scopes:

- Tracker
- Metadata
- Aggregate

For the Tracker scope, the audited objects are: Tracked Entity Instance, Tracked Entity Attribute Value, Enrollment and Event. For the Metadata scope, all "metadata" objects are audited. For the Aggregate scope, the Aggregate Data Value objects are audited.

## 4.4 Audit Type

An Audit Type is an action that triggers an audit operation. Currently the following types are supported:

- READ
- CREATE
- UPDATE
- DELETE

As an example, when a new Tracked Entity Instance is created and auditing is configured accordingly, the CREATE action is used to insert a new audit entry in the `audit` database table.

Note: the READ Audit Type will generate a lot of data in the database and may have an impact on performance.

## 4.5 Setup

The audit system is automatically configured to audit the following scopes and types:

- Types: CREATE, UPDATE, DELETE
- Scopes: METADATA, TRACKER, AGGREGATE

No action is required to activate the audit.
The audit can still be configured using the "audit matrix". The audit matrix is driven by three properties in dhis.conf:

- audit.metadata
- audit.tracker
- audit.aggregate

Each property accepts a semicolon-delimited list of valid Audit Types:

- CREATE
- UPDATE
- DELETE
- READ

For instance, in order to audit only Tracker-related object creation and deletion, the following property should be added to dhis.conf:

```
audit.tracker = CREATE;DELETE
```

In order to completely disable auditing, this is the configuration to use:

```plaintext
audit.metadata = 
audit.tracker = 
audit.aggregate = 
```
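As a further illustration (the exact mix of scopes and types is up to each instance), a matrix that audits all Tracker actions including reads, only metadata creations and deletions, and no aggregate data values could look like this in dhis.conf:

```plaintext
# Full tracker auditing, reduced metadata auditing, no aggregate auditing
audit.tracker = CREATE;UPDATE;DELETE;READ
audit.metadata = CREATE;DELETE
audit.aggregate = 
```

Remember that enabling READ can generate a large volume of audit rows, as noted above.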
Efficient computation of polynomial explanations of Why-Not questions

Nicole Bidoit, Melanie Herschel, Katerina Tzompanaki

To cite this version: Nicole Bidoit, Melanie Herschel, Katerina Tzompanaki. Efficient computation of polynomial explanations of Why-Not questions. 31ème Conférence sur la Gestion de Données - Principes, Technologies et Applications - BDA 2015, Sep 2015, Île de Porquerolles, France. <hal-01182104>

HAL Id: hal-01182104, https://hal.archives-ouvertes.fr/hal-01182104, submitted on 30 Jul 2015.

Nicole Bidoit (Université Paris Sud / Inria, 91405 Orsay Cedex, France) nicole.bidoit@lri.fr
Melanie Herschel (Universität Stuttgart, 70569 Stuttgart, Germany) melanie.herschel@ipvs.uni-stuttgart.de
Katerina Tzompanaki (Université Paris Sud / Inria, 91405 Orsay Cedex, France) katerina.tzompanaki@lri.fr

ABSTRACT

Answering a Why-Not question consists in explaining why the result of a query does not contain some expected data, called missing answers. This paper [6] focuses on processing Why-Not questions following a query-based approach that identifies the culprit query components. The first contribution of this paper is a general definition of a Why-Not explanation by means of a polynomial. Intuitively, the polynomial provides all possible explanations to explore in order to recover the missing answers.
Moreover, this formalism allows us to represent Why-Not explanations in extended relational models having for instance probabilistic or bag semantics. Computing the Why-Not explanation is a complex process and the second contribution of the paper is an algorithm that efficiently generates the aforementioned polynomials that answer Why-Not questions. An experimental evaluation demonstrates the practicality of the algorithm both in terms of efficiency and explanation quality, compared to existing algorithms.

RÉSUMÉ

Answering "Why-Not" questions consists in explaining why certain data, called missing answers, are absent from the result of a query. This paper addresses "Why-Not" questions following a query-based approach, that is, explanations are given by the combinations of query conditions responsible for not obtaining certain answers. The first contribution is a general definition of the explanation of a "Why-Not" question in the form of a polynomial. Intuitively, this polynomial provides all the paths to explore in order to recover the missing answers. Moreover, this definition makes it possible, within the same formalism, to address extensions of the relational model such as bag or probabilistic semantics. The second contribution of this paper concerns the computation of the explanations of a "Why-Not" question. An efficient algorithm is presented, together with an experimental validation and a comparative study.

1. INTRODUCTION

The increasing load of data produced nowadays is coupled with an increasing need for complex data transformations that developers design to process these data in every-day tasks. These transformations, commonly specified declaratively, may result in unexpected outcomes. For instance, given the sample query and data of Fig.
1 on airlines and destination countries, a developer (or traveler) may wonder why Emirates does not appear in the result. Traditionally, she would repeatedly manually analyze the query to identify a possible reason, fix it, and test it to check whether the missing answer is now present or whether other problems need to be fixed. Answering such Why-Not questions, that is, understanding why some data are not part of the result, is very valuable in a series of applications, such as query debugging and refinement, data verification or what-if analysis. To help developers explain missing answers, different algorithms have recently been proposed for relational and SQL queries as well as other types of queries like Top-k and reverse skyline queries. For relational queries, Why-Not questions can be answered for example based on the data (instance-based explanations), the query (query-based explanations), or both (hybrid explanations). We focus on solutions producing query-based explanations, as these are generally more efficient while providing sufficient information for query analysis and debugging. Essentially, a query-based explanation is a set of conditions of the query that are responsible for pruning data relevant to the missing answers. Existing methods producing query-based explanations are not satisfactory, as they return different explanations for the same SQL query, and miss explanations. This is due to the fact that these algorithms are designed over query trees and thus, the explanations depend on the topology of a given tree and, indeed, on the ordering of the query operators in the query tree.

EXAMPLE 1.1. Consider the SQL query and data of Fig. 1 and assume that a developer wants an explanation for the absence of Emirates from the query result. Fig. 2 shows two possible query trees for the query.
It also shows the tree operators that WhyNot [9] (△) and NedExplain [5] (⋆) return as query-based explanations, as well as the tree operators returned as part of hybrid explanations by Conseil [18, 19] (○). Each algorithm returns a different result for each of the two query trees, and in most cases it is only a partial result, as the true explanation of the missing answer is that both the selection is too strict for the tuple (Emirates, 1985, 3) from table Airline and this tuple does not find join partners in table Country.

The above example clearly shows the shortcomings of existing algorithms. Indeed, the developer first has to understand and reason at the level of query trees instead of reasoning at the level of the declarative SQL query she is familiar with. Second, she always has to wonder whether the explanation is complete, or if there are other explanations that she could consider instead. To address this problem, we make the following contributions in this paper:

Extended formalization of the Why-Not explanation polynomial. In preliminary work [3, 4], we introduced polynomials as Why-Not explanations in the context of the relational model under set semantics. A polynomial provides a complete explanation and is independent of a specific query tree representation, solving the problems illustrated by Ex. 1.1. This paper significantly extends the preliminary notion of a Why-Not explanation: the overall framework has been considerably simplified, while the notion of Why-Not explanation is extended to be used in the context of the relational model under set, bag and probabilistic semantics. This confirms the robustness of the chosen polynomial representation, making it a good fit for a unified framework for representing Why-Not explanations.

Efficient Ted++ algorithm. In our preliminary work [3, 4], we presented a naive algorithm computing Why-Not explanations. We show that its runtime complexity is impractical and propose a totally new algorithm, Ted++.
Ted++ is capable of efficiently computing the Why-Not explanation polynomial, based on techniques like smart data partitioning (allowing for a distributed computation) and the advantageous replacement of expensive database evaluations with mathematical calculations.

Experimental validation. We experimentally validate both the efficiency and the effectiveness of the solutions proposed in this paper. These experiments include a comparative evaluation of existing algorithms computing query-based explanations for SQL queries (or sub-queries thereof) as well as a thorough study of Ted++ performance w.r.t. different parameters.

The remainder of this paper is structured as follows. Sec. 2 covers related work. Sec. 3 defines in detail our problem setting and the Why-Not explanation polynomials. Next, we discuss in detail the Ted++ algorithm in Sec. 4. Finally, we present our experimental setup and evaluation in Sec. 5 before we conclude in Sec. 6.

2. RELATED WORK

Recently, we observe the trend that growing volumes of data are processed by programs developed not only by expert developers but also by less knowledgeable users (creation of mashups, use of web services, etc.). These trends have led to the necessity of providing algorithms and tools to better understand and verify the behavior and semantics of developed data transformations. Various solutions have been proposed so far, including data lineage [12] and more generally data provenance [11], (sub-query) result inspection and explanation [2, 15, 27], query condition relaxation [25], transformation specification simplification [23, 26], etc. The work presented in this paper falls in the category of data provenance research, and specifically explaining missing answers from query results. Due to the lack of space, the subsequent discussion focuses on this sub-problem, thus on algorithms answering Why-Not questions. Tab.
1 summarizes these algorithms, first classifying them according to the type of explanation they generate (instance-based, query-based, hybrid, ontology-based or refinement-based). The table further shows whether an algorithm supports simple Why-Not questions, i.e., questions where each condition impacts one relation only, or more complex ones. The last column summarizes the form of a returned explanation and the class of queries supported by each algorithm.

<table>
<thead>
<tr> <th>Explanation type</th> <th>Algorithm</th> <th>Why-Not question</th> <th>Explanation form; supported queries</th> </tr>
</thead>
<tbody>
<tr> <td>Query-based</td> <td>Why-Not [9]</td> <td>simple</td> <td>query operators; SPJU</td> </tr>
<tr> <td>Query-based</td> <td>NedExplain [5]</td> <td>simple</td> <td>query operators; SPJA and unions thereof</td> </tr>
<tr> <td>Query-based</td> <td>Ted [3] / Ted++</td> <td>complex</td> <td>polynomial; conj. queries with inequalities</td> </tr>
<tr> <td>Hybrid</td> <td>Conseil [18, 19]</td> <td>simple</td> <td>source table edits and query operators; SPJAN</td> </tr>
<tr> <td>Instance-based</td> <td>MA [21]</td> <td>simple</td> <td>source table edits; SPJ</td> </tr>
<tr> <td>Instance-based</td> <td>Artemis [20]</td> <td>complex</td> <td>source table edits; SPJUA</td> </tr>
<tr> <td>Instance-based</td> <td>Meliou et al. [24]</td> <td>simple</td> <td>causes (tuples) and responsibility; conjunctive queries</td> </tr>
<tr> <td>Ontology-based</td> <td>Calvanese et al. [1]</td> <td>simple</td> <td>additions to ABox; instance &amp; conj. queries over DL-Lite ontology</td> </tr>
<tr> <td>Ontology-based</td> <td>Cate et al. [8]</td> <td>simple</td> <td>tuples/concepts; conj. queries with comparisons</td> </tr>
<tr> <td>Refinement-based</td> <td>ConQueR [25]</td> <td>complex</td> <td>rewritten query; SPJ</td> </tr>
<tr> <td>Refinement-based</td> <td>Zhang et al. [13]</td> <td>simple</td> <td>refined query; Top-k queries</td> </tr>
<tr> <td>Refinement-based</td> <td>Islam et al. [22]</td> <td>simple</td> <td>refined query &amp; Why-Not question; reverse skyline queries</td> </tr>
<tr> <td>Refinement-based</td> <td>WQRTQ [13]</td> <td>simple</td> <td>refined query &amp; Why-Not question; reverse Top-k queries</td> </tr>
<tr> <td>Refinement-based</td> <td>Chen et al. [10]</td> <td>simple</td> <td>refined query; spatial keyword Top-k queries</td> </tr>
</tbody>
</table>

**Query-based and hybrid explanations.** Why-Not [9] takes as input a simple Why-Not question and returns so-called picked query operators as query-based explanation. To determine these, the algorithm first identifies tuples in the source database that satisfy the conditions of the input Why-Not question and that are not part of the lineage [12] of any tuple in the query result. These tuples, named compatible tuples, are traced through the query operators of a query tree representation to identify which operators include them in their input but not in their output. In [9] the algorithm is shown to work for queries involving selection, projection, join, and union (SPJU queries). NedExplain [5] is very similar to Why-Not in the sense that it supports simple Why-Not questions and returns a set of picked query operators as query-based Why-Not explanation as well. However, it supports a broader range of queries, i.e., queries involving selection, projection, join, and aggregation (SPJA queries) and unions thereof, and the computation of picked operators is significantly different. In this work, we support a wider class of Why-Not questions (complex ones) and provide a new formalization of Why-Not explanations as polynomials. Conseil [18, 19] produces hybrid explanations that include an instance-based and a query-based component. The latter consists in a set of picked query operators. However, as Conseil considers both the data to be possibly incomplete and the query to be possibly faulty, the set of picked query operators associated to a hybrid explanation depends on the set of source edits of the same hybrid explanation.

**Instance-based explanations.** Both Missing-Answers (MA) [21] and Artemis [20] compute instance-based explanations in the form of source table edits.
Whereas MA returns correct explanations for simple Why-Not questions and SQL queries involving selection, projection, and join (SPJ queries), Artemis supports complex Why-Not questions on a larger fraction of SQL queries (including union and aggregation, denoted SPJUA). Meliou et al. [24] study the unification of instance-based explanations of missing answers and of data present in a conjunctive query result, leveraging the concepts of causality and responsibility. Finally, Calvanese et al. [7] leverage abductive reasoning and theoretically examine the problem of computing instance-based explanations for a class of simple Why-Not questions on data represented by a DL-Lite ontology. As here the explanations are in the form of source edits, we consider these works orthogonal to query-based ones.

**Ontology-based explanations.** Cate et al. [8] introduce ontology-based explanations for conjunctive queries with comparisons (<, ≤, >, ≥). They also provide an algorithm that computes these ontology-based explanations. The algorithm as well as the explanations are completely independent of the query to be analysed and therefore, we consider this approach orthogonal to our work here.

**Refinement-based explanations.** Given a set of missing answers and an SPJUA query, ConQueR [28] refines the query such that all missing answers become part of the output. Refinements of the query and of the Why-Not question have been proposed in other contexts as well, for example for Top-k queries (Zhang et al. [22]), reverse skyline queries (Islam et al. [21]), reverse Top-k queries (WQRTQ [13]), spatial keyword Top-k queries (Chen et al. [10]), etc. Although these approaches are generally very interesting, they do not focus on pinpointing to the user the erroneous parts of the query, but on directly refining the query. Indeed, the generated queries may contain changes that are not necessarily tied to an erroneous part of the query. For this reason, they are out of the scope of this paper.

### 3.
WHY-NOT EXPLANATION POLYNOMIAL

This section introduces a polynomial formalization of query-based Why-Not explanations. We assume the reader is familiar with the relational model [1], and we only briefly revisit some relevant notions in Sec. 3.1 while we formalize Why-Not questions. Then, in Sec. 3.2, we define the explanation of a Why-Not question as a polynomial, and in Sec. 3.3 we provide a unified general framework for Why-Not explanations in the context of probabilistic, set, and bag semantics databases.

#### 3.1 Preliminaries

For the moment, we limit our discussion to relational databases under set semantics. A database schema $S$ is a set of relation schemas. A relation schema $R$ is a set of attributes. We assume each attribute of $R$ qualified, i.e., of the form $R.A$, and, for the sake of simplicity, we assume a unique domain $D$. $I$ denotes a database instance over $S$ and $I[R]$ denotes the instance of a relation $R$ in $S$. We assume that each database relation $R$ has a special identifier attribute $R.Id$.

Fig. 3: Running example. (a) Sample instance $I$ over the schema $S = \{R, S, T\}$; (b) the query $Q$; (c) the Why-Not question $WN$.

**Definition 3.1 (Query $Q$).** A query $Q$ is specified by the triple $(S, \Gamma, C)$, where $S$ is a database schema, $\Gamma \subseteq A(S)$ is the projection attribute set, and $C$ is a set of conditions over $A(S)$. The semantics of $Q$ is given by the relational algebra expression \[ Q = \pi_{\Gamma}\big[\sigma_{C}\big[\times_{R \in S}\, I[R]\big]\big]. \] The result of $Q$ over a database instance $I$ of $S$ is denoted by $Q[I]$. Note here that we are not concerned about the evaluation/optimization of $Q$.

**Example 3.1.** Fig. 3 describes our running example. Fig. 3(a) displays an instance $I$ over $S$. Fig. 3(b) displays a query $Q$ over $S$, whose conditions have been named for convenience. $R.B=T.B$ and $T.D=S.D$ are complex whereas the others are simple conditions. Moreover, $Q[I] = \{(R.B:5,\ S.D:4,\ T.C:9)\}$.

In our framework, a Why-Not question specifies missing tuples from the result of a query $Q$ through a conjunctive set of conditions. As a Why-Not question is related to the result of $Q$, the conditions of the Why-Not question are restricted to the attributes of the output schema of $Q$.

**Definition 3.2 (Why-Not question).** A Why-Not question $WN$ w.r.t. $Q$ is defined as a set of conditions over $\Gamma$.

The notion of complex and simple conditions is extended to complex and simple Why-Not questions in a straightforward manner. As we said, a Why-Not question $WN$ summarizes a set of (missing) tuples that the user expected to find in the query result.
To be able to obtain these missing tuples as query results, data from the input relation instances that satisfy $WN$ need to be combined by the query. The candidate data combinations are what we call compatible tuples and these can be computed using $WN$ as in Def. 3.3.

**Definition 3.3 (Compatible tuples).** Consider the query $Q_{WN} = (S, A(S), WN)$, where $S$ is the input schema of $Q$. The set $CT$ of compatible tuples is the result of the query $Q_{WN}$ over $I$.

We further introduce the notion of a well founded Why-Not question. Intuitively, a Why-Not question can be answered under a query-based approach only if some data in $I$ match the Why-Not question (otherwise instance-based explanations should be sought for). Moreover, a Why-Not question is meaningful if it tracks data not already returned by the query.

**Definition 3.4 (Well founded Why-Not question).** A Why-Not question $WN$ is said to be well founded if $CT \neq \emptyset$ and $\pi_{\Gamma}[CT] \cap Q[I] = \emptyset$.

**Example 3.2.** Continuing Ex. 3.1, we may wonder why there is not a tuple for which $R.B < S.D$ and $T.C \leq 9$. According to Def. 3.2, this Why-Not question is the conjunction of the conditions $R.B < S.D$ and $T.C \leq 9$ (Fig. 3(c)). Since $R.B < S.D$ is a complex condition, $WN$ is a complex Why-Not question. The compatible tuples set $CT$ is the result of the query $Q_{WN} = \sigma_{R.B < S.D \land T.C \leq 9}[R \times S \times T]$, which contains 12 tuples. For example, one compatible tuple is $\tau_1 = (R.Id:1,\ R.A:1,\ R.B:3,\ S.Id:5,\ S.D:4,\ S.E:8,\ T.Id:8,\ T.B:3,\ T.C:4,\ T.D:5)$. Each tuple in $CT$ could have led to a missing tuple, if it was not eliminated by some of the query's conditions. Thus, explaining $WN$ amounts to identifying these blocking query conditions.
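The computation of compatible tuples and of their blocking conditions can be turned into a small, self-contained sketch. The relations, conditions, and data below are hypothetical stand-ins (not the Fig. 3 instance, and not the efficient Ted++ algorithm; this is the naive enumeration the paper improves upon):

```python
from itertools import product
from collections import Counter

# Hypothetical two-relation schema; tuples are dicts from attribute to value.
R = [{"R.A": 1, "R.B": 3}, {"R.A": 2, "R.B": 4}]
S = [{"S.D": 4, "S.E": 8}, {"S.D": 5, "S.E": 3}]

# Query conditions C, named as in the paper; each maps a name to a predicate.
C = {
    "c1": lambda t: t["R.B"] == t["S.D"],   # complex (join) condition
    "c2": lambda t: t["S.E"] > 8,           # simple condition
}

# Why-Not question WN: a conjunctive condition over the output attributes.
WN = lambda t: t["R.A"] < t["S.D"]

# Compatible tuples: the cross product filtered by WN.
CT = [dict(**r, **s) for r, s in product(R, S) if WN(dict(**r, **s))]

# Group compatible tuples by the set of query conditions they violate;
# each group becomes one term of the explanation polynomial.
polynomial = Counter(
    frozenset(name for name, cond in C.items() if not cond(t)) for t in CT
)
for expl, coef in sorted(polynomial.items(), key=lambda kv: sorted(kv[0])):
    print(coef, "*", "*".join(sorted(expl)) or "1")
# Prints:
# 3 * c1*c2
# 1 * c2
```

Note how the coefficients sum to |CT|, since every compatible tuple is pruned by exactly one combination of conditions.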
### 3.2 The Why-Not Explanation Polynomial

To build the query-based explanation of $WN$, we start by specifying what explains that a compatible tuple $\tau$ did not lead to an answer. Intuitively, the explanation consists of the query conditions pruning $\tau$.

**Definition 3.5 (Explanation for $\tau$).** Let $\tau \in CT$ be a compatible tuple w.r.t. $WN$, given $Q$. Then, the explanation for $\tau$ is the set of conditions $E_\tau = \{ c \mid c \in C \ \text{and}\ \tau \not\models c \}$.

**Example 3.3.** Reconsider the compatible tuple $\tau_1$ in Ex. 3.2. The conditions of $Q$ (see Ex. 3.1) not satisfied by $\tau_1$ are: $c_1$, $c_3$, and $c_4$. So, the explanation for $\tau_1$ is $E_{\tau_1} = \{c_1, c_3, c_4\}$.

Having defined the explanation w.r.t. one compatible tuple, the explanation for $WN$ is obtained by simply summing up the explanations for all the compatible tuples in $CT$. This leads to an expression of the form $\sum_{\tau \in CT} \prod_{c \in E_\tau} c$. We justify modelling the explanation of $\tau$ with a product (meaning conjunction) of conditions by the fact that in order for $\tau$ to 'survive' the query conditions and give rise to a missing tuple, every single condition in the explanation must be 'repaired'. The sum (meaning disjunction) of the products for each $\tau \in CT$ implies that if any explanation is 'correctly repaired', the associated $\tau$ will produce a missing tuple. Of course, several compatible tuples can share the same explanation. Thus, the final Why-Not explanation is a polynomial having as variables the query conditions and as integer coefficients the number of compatible tuples sharing an explanation.
**Definition 3.6 (Why-Not explanation).** With the same assumptions as before, the Why-Not explanation for $WN$ is defined as the polynomial
\[ PEX = \sum_{E \in \mathcal{E}} \mathrm{cof}(E) \prod_{c \in E} c \]
where $\mathcal{E} = 2^C$, $\mathrm{cof}(E) \in \{0, \ldots, |CT|\}$ is the number of tuples in $CT$ sharing $E$ as an explanation, and $\sum_{E \in \mathcal{E}} \mathrm{cof}(E) = |CT|$.

Intuitively, $\mathcal{E}$ contains all possible explanations, i.e., condition combinations, and each of these explanations prunes from zero to at most $|CT|$ compatible tuples. Each compatible tuple is pruned by exactly one condition combination, which is why the sum of all coefficients is equal to the number of compatible tuples.

We mentioned before that each term of the polynomial provides an alternative explanation to be explored by the user who wishes to recover some missing tuples. Additionally, the polynomial as in Def. 3.6 offers through its coefficients some useful hints to users interested in the number of recoverable tuples. More precisely, by isolating an explanation $E$ to repair, we can obtain an upper bound for the number of compatible tuples that can be recovered. The upper bound is calculated as the sum of the coefficients of all the explanations that are subsets of (the set of conditions of) $E$, because when $E$ is changed it is likely that some sub-combinations are also repaired.

**Example 3.4.** In Ex. 3.3 we found the explanation $\{c_1, c_3, c_4\}$, leading to the polynomial term $c_1 \ast c_3 \ast c_4$. Taking into consideration all the 12 compatible tuples of our example, we obtain the following PEX polynomial: $2 \ast c_1 \ast c_4 + 4 \ast c_1 \ast c_2 \ast c_4 + c_1 \ast c_3 \ast c_4 + 2 \ast c_1 \ast c_2 \ast c_3 + c_2 \ast c_3 \ast c_4 + 2 \ast c_1 \ast c_2 \ast c_3 \ast c_4$. In the polynomial, each addend, composed of a coefficient and an explanation, captures a way to obtain missing tuples.
For instance, the explanation $c_1 \ast c_2 \ast c_4$ indicates that we may recover some missing answers if $c_1$, $c_2$, and $c_4$ are changed. Then, the sum of its coefficient 4 and the coefficient 2 of the explanation $c_1 \ast c_4$ (since $\{c_1, c_4\} \subseteq \{c_1, c_2, c_4\}$) indicates that we can recover from 0 to 6 tuples.

As the presentation of the polynomial per se may sometimes be cumbersome, and thus not easy for a user to manipulate, some post-processing steps could be applied. For example, depending on the application or needs, only a subset of the explanations could be returned: minimal explanations (i.e., for which no sub-explanations exist), explanations expected to recover a specific number of tuples, or explanations involving specific condition types.

### 3.3 Extension: Bag & Probabilistic Semantics

So far, we have considered databases under set semantics only. In this section, we discuss how the definition of the Why-Not explanation (Def. 3.6) extends to settings with conjunctive queries over bag semantics and probabilistic databases.

$K$-relations, as introduced in [14], capture in a unified manner relations under probabilistic, bag or set semantics. Briefly, a $K$-relation maps tuples to elements of a set $K$, that is, $K$-relation tuples are annotated with elements in $K$. In our case, we consider that $K$ is a set of tuple identifiers, similar to our special attribute $R.Id$ in Sec. 3.1. In what follows, we use the notion of how-provenance of tuples in the result of a query $Q$. The how-provenance of a tuple $t \in Q(I)$ is modelled as the polynomial obtained by the positive algebra on $K$-relations, as proposed in [14]. Briefly, each $t$ is annotated with a polynomial whose variables are tuple identifiers and whose coefficients are natural numbers.
Following [14]’s algebra, if \( t \) results from a selection operator on a tuple \( t_1 \) annotated with \( Id_1 \), then \( t \) is also annotated with \( Id_1 \). If \( t \) is the result of the join of \( t_1 \) and \( t_2 \), then \( t \) is annotated with \( Id_1Id_2 \). We compute the generalized Why-Not explanation polynomial as follows. Firstly, we compute the how-provenance for compatible tuples in \( CT \) by evaluation of the query \( Q_{WN} \) (Def. 3.3) wrt the algebra in [14]. Recall that \( Q_{WN} \) contains only selection and join operators. Thus, we assume that each compatible tuple \( \tau \) in \( CT \) is annotated with its how-provenance polynomial, denoted by \( \eta_\tau \). In a second step, we associate the expressions of how and why-not provenance. In order to do this, for each compatible tuple $\tau$ in $CT$, we combine its how-provenance polynomial $\eta_{\tau}$ with its explanation $E_{\tau}$ (Def. 3.5). So, each $\tau$ is annotated with the expression $\eta_{\tau}E_{\tau}$. Finally, as before, we sum the combined expressions for all compatible tuples. The result is the generalized Why-Not explanation $PEX_{gen} = \sum_{\tau \in CT} \eta_{\tau}E_{\tau}$. We now briefly comment on how $PEX_{gen}$ is instantiated to deal either with the set, bag or probabilistic semantics. Indeed, the ‘specialization’ of $PEX_{gen}$ relies on the interpretation of the elements in $K$, that is on a function $Eval$ from $K$ to some set $L$. For the set semantics, each tuple in a relation occurs only once. This results in choosing $L$ to be the singleton $\{1\}$ and mapping each tuple identifier to 1. It is then quite obvious to note, for the set semantics, that $PEX_{gen} = PEX$ (Def. 3.6). In the same spirit, for bag semantics, $L$ is chosen as the set of natural numbers $N$ and each tuple identifier is mapped to its number of occurrences. 
Finally, for probabilistic databases, $L$ is chosen as the interval $[0,1]$ and each tuple identifier is mapped to its occurrence probability. Thus, the generalized definition of the Why-Not explanation is parameterized by the mapping $Eval$ of the annotations (elements in $K$) to the set $L$.

**DEFINITION 3.7. (Generalized Why-Not explanation polynomial)** Given a query $Q$ over a database schema $\mathcal{S}$ of $K$-relations, the generalized Why-Not explanation polynomial for $WN$ is
$$PEX_{gen} = \sum_{E \in \mathcal{E}} \Big( \sum_{\tau \in CT \text{ s.t. } E_\tau = E} Eval(\eta_{\tau}) \Big) \prod_{c \in E} c$$
where $\mathcal{E} = 2^C$, $\eta_{\tau}$ is the how-provenance of $\tau$, and $Eval: K \rightarrow L$ evaluates the elements of $K$ to values in $L$.

The specializations of $PEX_{gen}$ share the same explanations (terms of the polynomial), capturing the same 'erroneous' parts of the query. However, the coefficients are interpreted w.r.t. the how-provenance.

### 4. TED++ ALGORITHM

In [3], we have introduced Ted, a naive algorithm that implements the definitions of Sec. 3 for Why-Not explanations in a straightforward manner. Briefly, Ted enumerates the set of compatible tuples $CT$ by executing the query $Q_{WN}$ (Def. 3.3). Then, it computes the explanation for each compatible tuple in $CT$, which leads to the computation of the final Why-Not explanation. However, both of these steps make Ted computationally prohibitive. Not only is the computation of $CT$ time and space consuming, as it often requires cross product executions, but the iteration over this (potentially very large) set is also time consuming. Ted's time complexity is $O(n^{|\mathcal{S}|})$, where $n = \max(|I_R|)$, $R \in \mathcal{S}$. As the experiments in Sec. 5 confirm, this complexity renders Ted of no practical interest. To overcome the poor performance of Ted, we propose Ted++.
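Before turning to Ted++, the way $Eval$ specializes $PEX_{gen}$ can be sketched as follows. The data are invented for illustration: each compatible tuple carries its how-provenance $\eta_\tau$, here a tuple of source-tuple identifiers (their product in [14]'s algebra), and $Eval$ distributes over that product.

```python
# Sketch of Def. 3.7 with toy data: the same explanations, but coefficients
# obtained by summing Eval(eta_tau) instead of counting tuples.
import math
from collections import defaultdict

CT = [   # (explanation E_tau, how-provenance eta_tau)
    (frozenset({"c1", "c3"}), ("t1", "t7")),
    (frozenset({"c1", "c3"}), ("t2", "t7")),
    (frozenset({"c2"}),       ("t2", "t8")),
]

def pex_gen(ct, eval_id):
    """Coefficient of each explanation: sum over its tuples of Eval(eta_tau)."""
    coeffs = defaultdict(int)
    for expl, eta in ct:
        coeffs[expl] += math.prod(eval_id(i) for i in eta)
    return dict(coeffs)

multiplicity = {"t1": 2, "t2": 1, "t7": 3, "t8": 1}   # bag annotations
prob = {"t1": 0.5, "t2": 0.9, "t7": 1.0, "t8": 0.2}   # probabilistic annotations

set_pex  = pex_gen(CT, lambda i: 1)                # set semantics: PEX of Def. 3.6
bag_pex  = pex_gen(CT, lambda i: multiplicity[i])  # multiplicities multiply in joins
prob_pex = pex_gen(CT, lambda i: prob[i])          # assumes independent tuples
```

Under set semantics every annotation evaluates to 1, so the sketch reduces to counting tuples per explanation, exactly as in Def. 3.6.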
The main feature of Ted++ is to completely avoid enumerating and iterating over the set $CT$, thus significantly reducing both space and time consumption. Instead, Ted++ opts for (i) iterating over the space of possible explanations, which is expected to be much smaller, (ii) computing partial sets of passing compatible tuples, and (iii) computing the number of eliminated compatible tuples for each explanation. Intuitively, passing tuples w.r.t. an explanation are tuples satisfying the conditions of the explanation. Finally, the polynomial is computed based on mathematical calculations. Theorem 4.1 states that Ted++ is sound and complete w.r.t. Def. 3.6.

**THEOREM 4.1.** Given a query $Q$, a Why-Not question $WN$ and an input instance $I$, Ted++ computes exactly $PEX$.

Alg. 1 provides an outline of Ted++. The input includes the query $Q = (\mathcal{S}, \Gamma, C)$, the Why-Not question $WN$ and the input instance $I$. Firstly, in Alg. 1, line 1, all potential explanations (combinations of the conditions in $C$) are enumerated ($\mathcal{E} = 2^C$). The remaining steps, discussed in the next subsections, aim at computing the coefficient of each explanation. To illustrate the concepts introduced in the detailed discussions, we rely on our running example, for which Fig. 4 shows all relevant intermediate results; it should be read bottom-up. For convenience, in our examples, we use subscript $i$ instead of $c_i$.

### 4.1 Partial Compatible Tuples Computation

Using the conditions in $WN$, Ted++ partitions the database schema $\mathcal{S}$ (Alg. 1, line 2) into components of relations connected by the conditions in $WN$ (Def. 4.1).

**DEFINITION 4.1. (Valid Partitioning of $\mathcal{S}$).** Given $WN$, the partitioning of a database schema $\mathcal{S}$ into $k$ partitions, denoted $\mathcal{P} = \{Part_1, \ldots, Part_k\}$, is valid if each $Part_i$, $i \in \{1, \ldots, k\}$, is minimal w.r.t. the following property:
- if $R \in Part_i$ and $R' \in \mathcal{S}$ are s.t.
$\exists c \in WN$ with $A(c) \cap A(R') \neq \emptyset$ and $A(c) \cap A(R) \neq \emptyset$, then $R' \in Part_i$.

The partitioning of $\mathcal{S}$ allows for handling compatible tuples more efficiently, by 'cutting' them into distinct meaningful 'chunks' and avoiding the combination of chunks over distinct partitions through cross products. We refer to the chunks of compatible tuples as partial compatible tuples and group them in sets depending on the partition they belong to. The set $CT_{|Part_i}$ of partial compatible tuples w.r.t. $Part_i \in \mathcal{P}$ is obtained by evaluating the query $Q_{Part_i} = (Part_i, A(Part_i), WN_{|Part_i})$ over $I_{|Part_i}$ (Alg. 1, line 4). $WN_{|Part_i}$ and $I_{|Part_i}$ denote the restriction of $WN$ and $I$ over the relations in $Part_i$, respectively.

**EXAMPLE 4.1.** The valid partitioning of $\mathcal{S}$ is $Part_1 = \{R, S\}$ (because of the condition $R.B < S.D$) and $Part_2 = \{T\}$. The sets of partial compatible tuples $CT_{|Part_1}$ and $CT_{|Part_2}$ are given in the bottom line of Fig. 4.

It is easy to prove that the valid partitioning of $\mathcal{S}$ is unique and that the set $CT$ can be computed from the individual $CT_{|Part_i}$.

**LEMMA 4.1.** Let $\mathcal{P}$ be the valid partitioning of $\mathcal{S}$. Then, $CT = \times_{Part_i \in \mathcal{P}} CT_{|Part_i}$.

Indeed, Lemma 4.1 makes it clear how $CT$ is computed from partial compatible tuples. Our algorithm is designed in a way that avoids computing $CT$ and relies on the computation of the $CT_{|Part_i}$ only.

Algorithm 2: coefficientEstimation
Input: $\mathcal{E}$ explanations space, $\mathcal{P}$ valid partitioning of $\mathcal{S}$
 1. for $E \in \mathcal{E}$, accessed in ascending size order, do
 2.     compute $Part_E$
 3.     if $|E| = 1$ then
 4.         materialize $V_E$
 5.         $\beta_E \leftarrow$ Eq. (B)
 6.     else
 7.         if $\alpha_{E'} = 0$ for some sub-combination $E'$ of $E$ then
 8.             $\alpha_E \leftarrow 0$; continue
 9.         if the view partitioning of $E$ is a singleton then
10.             $(E_1, E_2) \leftarrow$ subCombinations($E$)
11.             materialize $V_E \leftarrow V_{E_1} \bowtie V_{E_2}$
12.             compute $|V_E|$
13.         else
14.             $|V_E| \leftarrow$ product of the $|V_{E'}|$ over the view groups of $E$
15.         end
16.         $\beta_E \leftarrow$ Eq. (E)
17.     $\alpha_E \leftarrow$ Eq. (A)

Next, we compute the number of compatible tuples eliminated by each possible explanation, starting from the partial compatible tuple sets previously defined. These numbers approximate the coefficients of the explanations in the polynomial. Since from this point on we only handle (partial) compatible tuples, we omit the word 'compatible' to lighten the discussion.

### 4.2 Polynomial Coefficient Estimation

Each set $E$ in the powerset $\mathcal{E}$ is in fact a potential explanation that is further processed. To this end, we associate $E$ with (i) the set of partitions $Part_E$ on which $E$ is defined, (ii) the view definition $V_E$ meant to store the passing partial tuples w.r.t. $E$, and (iii) the number $\alpha_E$ of tuples eliminated by $E$. Alg. 2 describes how we process the explanations in ascending order of size, in order to compute $\alpha_E$. Each step deals with explanations of size $s$, reusing results from previous steps and avoiding cross product computations through mathematical calculations.

We first determine the set of partitions for an explanation $E$ as $Part_E = \bigcup_{c \in E} Part_c$, where $Part_c$ contains the partitions holding at least one relation over which $c$ is specified.

**EXAMPLE 4.2.** Consider $E_1 = \{c_1\}$ and $E_2 = \{c_2\}$. From Fig. 3(b) and the partitions in Fig. 4, we can see that $c_1$ impacts only $Part_1$, whereas $c_2$ spans over $Part_1$ and $Part_2$. Hence, $Part_{E_1} = \{Part_1\}$ and $Part_{E_2} = \{Part_1, Part_2\}$.
Then, $E = \{c_1, c_2\}$ is impacted by the union of $Part_{E_1}$ and $Part_{E_2}$, thus $Part_E = \{Part_1, Part_2\}$.

We use Eq. (A) to calculate the number $\alpha_E$ of eliminated tuples, using the number $\beta_E$ of eliminated partial tuples and the cardinality of the partitions not in $Part_E$. Intuitively, this formula extends the partial tuples to "full" tuples over $CT$'s schema.
$$\alpha_E = \beta_E \cdot \prod_{Part \in \overline{Part_E}} |CT_{|Part}| \quad \text{(A)}$$
where $\overline{Part_E} = \mathcal{P} \setminus Part_E$. Note that when $\overline{Part_E}$ is empty, we abusively consider that $\prod_{\emptyset} = 1$. The presentation now focuses on calculating $\beta_E$. Two cases arise depending on the size of $E$.

**Atomic explanations.** We start with explanations $E$ containing only one condition $c$ (Algorithm 2, lines 3-5), which we call atomic explanations. To find the number of eliminated partial tuples $\beta_E$, we first compute the set of passing partial tuples w.r.t. $c$, which we store in the view $V_c$:
$$\beta_E = \prod_{Part \in Part_E} |CT_{|Part}| - |V_c| \quad \text{(B)}$$

**EXAMPLE 4.3.** For $c_2$, we have $Part_{c_2} = \{Part_1, Part_2\}$, so $V_{c_2} = \pi_{R.Id, S.Id, T.Id}\,\sigma_{c_2}(CT_{|Part_1} \times CT_{|Part_2})$.

For explanations $E = \{c_{j_1}, \ldots, c_{j_n}\}$ of size $n > 1$, the eliminated partial tuples are those failing every condition in $E$. Hence, $\beta_E = |\bigcap_{c_j \in E} \overline{V_{c_j}}|$. By the well-known De Morgan law [29], we have $\beta_E = \prod_{Part \in Part_E} |CT_{|Part}| - |\bigcup_{c_j \in E} V_{c_j}|$, which spares us from computing the complements of the $V_{c_j}$.
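The counting scheme above, combined with the coefficient post-processing of Eq. (F), can be sketched as follows. This is an unoptimized toy illustration, not Ted++ itself: it assumes a single partition (so all views share one schema and no extension is needed), and the passing sets per condition are hypothetical.

```python
# alpha_E, the number of tuples failing every condition in E, equals |CT|
# minus the union of the passing sets (De Morgan); the union is computed by
# inclusion-exclusion, and coefficients by peeling off super-combinations.
from itertools import combinations

CT_size = 12                          # as in the running example
V = {                                 # hypothetical passing sets per condition
    "c1": {0, 1, 2, 3},
    "c2": {2, 3, 4, 5, 6},
}

def powerset(conds):
    return [frozenset(J) for r in range(1, len(conds) + 1)
            for J in combinations(conds, r)]

def alpha(E):
    """|CT| - |union of V_c for c in E|, the union by inclusion-exclusion."""
    union_size = 0
    for r in range(1, len(E) + 1):
        for J in combinations(sorted(E), r):
            inter = set.intersection(*(V[c] for c in J))
            union_size += (-1) ** (r + 1) * len(inter)
    return CT_size - union_size

coef = {}
for E in sorted(powerset(V), key=len, reverse=True):        # largest first
    coef[E] = alpha(E) - sum(coef[S] for S in coef if E < S)   # Eq. (F)
```

Explanations ending up with coefficient 0 simply contribute no term to the polynomial.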
To compute the cardinality of the union among the $V_{c_j}$, we rely on the Principle of Inclusion and Exclusion for counting [16]:
$$\Big| \bigcup_{i=1}^{n} V_{c_i} \Big| = \sum_{\emptyset \neq J \subseteq \{1,\ldots,n\}} (-1)^{|J|-1} \Big| \bigcap_{j \in J} V_{c_j} \Big|$$
We further rewrite the previous formula to re-use results obtained for sub-combinations of $E$, obtaining Eq. (C).
$$\Big| \bigcup_{i=1}^{n} V_{c_i} \Big| = \Big| \bigcup_{i=1}^{n-1} V_{c_i} \Big| + |V_{c_n}| + \sum_{\emptyset \neq J \subseteq \{1,\ldots,n-1\}} (-1)^{|J|} \Big| \bigcap_{j \in J} V_{c_j} \cap V_{c_n} \Big| \quad \text{(C)}$$
At this point, we have all the necessary data to compute $\beta_E$. However, so far we assumed that the conditions in $E$ have the same schema. In the general case, we have to "extend" the schema of a view $V_c$ to the one of $V_E$, in order to have well-defined set operations. The cardinality of an extended view $V_c^{ext}$ is given by Eq. (D).
$$| V_{c}^{ext} | = |V_c| \cdot \prod_{Part \in Part_E \setminus Part_c} |CT_{|Part}| \quad \text{(D)}$$
Based on Eq. (D), we obtain Eq. (E), which generalizes Eq. (C) to extended views:
$$\Big| \bigcup_{i=1}^{n} V_{c_i}^{ext} \Big| = \Big| \bigcup_{i=1}^{n-1} V_{c_i}^{ext} \Big| + |V_{c_n}^{ext}| + \sum_{\emptyset \neq J \subseteq \{1,\ldots,n-1\}} (-1)^{|J|} \Big| \Big( \bigcap_{j \in J} V_{c_j} \cap V_{c_n} \Big)^{ext} \Big| \quad \text{(E)}$$
The number of eliminated partial tuples $\beta_E$ follows, and the number of eliminated tuples $\alpha_E$ is then calculated by Eq. (A).

**EXAMPLE 4.4.** Please follow Fig. 4 in the discussion below. Consider first the explanation $c_3c_5$. The schemas of $V_3$ and $V_5$ are disjoint and intuitively $V_{35} = V_3 \times V_5$. Here, $V_{35}$ is not materialized; we simply calculate $|V_{35}^{ext}| = 6$. Then, $\beta_{35} = 12 - (12 + 6 - 6) = 0$. As we will see later, these steps are never actually performed in our algorithm: the fact that $c_5$ does not eliminate any tuple (see the zero count for $c_5$ in Fig. 4) implies that neither does any of its super-combinations. Thus, a priori we know that $\alpha_{35} = \alpha_{235} = \ldots = 0$. Finally, we illustrate the case of a bigger size combination, for example $c_2c_3c_4$ of size 3. Eq.
(E) yields $|(V_2 \cup V_3 \cup V_4)^{ext}| = |(V_2 \cup V_3)^{ext}| + |V_4^{ext}| - |(V_2 \cap V_4)^{ext}| - |(V_3 \cap V_4)^{ext}| + |(V_2 \cap V_3 \cap V_4)^{ext}|$. All terms of the right-hand side of the equation are available from previous iterations, except for $|(V_2 \cap V_3 \cap V_4)^{ext}|$. As before, we check the common attributes of the views and obtain $V_{234} = V_{23} \bowtie V_4$. So, $|(V_2 \cup V_3 \cup V_4)^{ext}| = 9$ and $\beta_{234} = \alpha_{234} = 12 - 9 = 3$. In the same way, we compute all the possible explanations up to $c_1c_2c_3c_4c_5$.

**View materialization: when and how.** To decide when and how to materialize the view $V_E$ for an explanation $E$, we partition the set of views associated with the conditions in $E$. Consider the relation $\sim$ defined over these views by $V_i \sim V_j$ iff the target schemas of $V_i$ and $V_j$ have at least one common attribute. Consider the transitive closure $\sim^*$ of $\sim$ and the induced partitioning of the views through $\sim^*$. When this partitioning is a singleton, $V_E$ needs to be materialized (Alg. 2, line 9). The materialization of $V_E$ is specified by joining the views associated with sub-combinations of $E$, which may be done in more than one way, as usual. For example, for the combination $c_2c_3c_4$, $V_{234}$ can be computed through $V_{23} \bowtie V_4$, $V_{24} \bowtie V_3$, or $V_{34} \bowtie V_2$, because all these views are known from previous iterations. The choice of the query used to materialize $V_E$ is made based on a cost function. This function gives priority to materializing $V_E$ by means of one join, which is always possible: because $V_E$ needs to be materialized, we know that at least one view associated with a sub-combination of size $n-1$ has been materialized.
In other words, priority is given to using at least one materialized view associated with one of the largest sub-combinations. For our example, it means that $V_{23} \bowtie V_4$, $V_{24} \bowtie V_3$, or $V_{34} \bowtie V_2$ are considered. In order to choose among the one-join queries computing $V_E$, we favor a one-join query $V_{E_1} \bowtie V_{E_2}$ minimal w.r.t. $|V_{E_1}| + |V_{E_2}|$. For the example, and considering also Fig. 4, we find that $|V_{24}| + |V_3| = |V_{34}| + |V_2| = 5$ and $|V_{23}| + |V_4| = 3$. So, the query used for the materialization is $V_{23} \bowtie V_4$ (its result being empty in our example). Nevertheless, even when the partitioning is a singleton, we avoid the materialization of $V_E$ (Alg. 2, lines 7-8) when for some sub-combination $E'$ of $E$ it was computed that $\alpha_{E'} = 0$. In that case, we know a priori that $\alpha_E = 0$ (see Ex. 4.4). If the partitioning is not a singleton, $V_E$ is not materialized (Alg. 2, line 14). For example, the partitioning for $c_3c_5$ is not a singleton and so $|V_{35}| = |V_3| \times |V_5| = 6$.

**Post-processing.** In Alg. 2 we associated with each possible explanation $E$ the number of eliminated tuples $\alpha_E$. However, recall that the calculation of this number so far counts any tuple eliminated by $E$, even though the same tuples may also be eliminated by some super-combinations of $E$ (see Ex. 4.5). This means that multiple explanations have been assigned to some tuples. To make things even, the last step of the process calculates the coefficient of $E$ by subtracting the coefficients of its super-combinations from $\alpha_E$:
$$\text{coef}_{E} = \alpha_{E} - \sum_{E' \supset E} \text{coef}_{E'} \quad \text{(F)}$$

**EXAMPLE 4.5.** Consider known $\text{coef}_{1234} = 2$ and $\text{coef}_{123} = 2$. We have found in Ex. 4.4 that $\alpha_{234} = 3$. With Eq.
(F), $\text{coef}_{234} = 3 - 2 = 1$. In the same way, we obtain $\text{coef}_{23} = 0$ and $\text{coef}_{2} = 0$. The algorithm leads to the expected Why-Not explanation polynomial already provided in Ex. 3.4.

### 4.3 Complexity analysis

In the pseudo-code for Ted++ provided in Alg. 1, we can see that Ted++ divides into the phases of (i) partitioning $\mathcal{S}$, (ii) materializing a view for each partition, (iii) computing the explanations, and (iv) computing the exact coefficients. When computing the explanations, according to Alg. 2, Ted++ iterates through the $2^{|C|}$ condition combinations and, for each, decides upon view materialization (again through partitioning) before materializing it, or simply calculates $|V_E|$, before applying the equations to compute $\alpha_E$. Overall, considering all mathematical computations negligible, the worst case complexities of steps (i) through (iv) are $O(|\mathcal{S}|+|WN|) + O(|\mathcal{S}|) + O(2^{|C|}(|\mathcal{S}|+|C|)) + O(2^{|C|})$. For large enough queries, we can assume that $|\mathcal{S}|+|C| \leq 2^{|C|}$, in which case the complexity simplifies to $O(2^{2|C|})$.

Obviously, the complexity analysis above does not take into account the cost of actually materializing views; in its simplified form, it only considers how many views need to be materialized in the worst case. Assume that $n = \max(|I_R|)$, $R \in \mathcal{S}$. The materialization of any view is bounded by the cost of materializing a cross product over the relations involved in the view, in the worst case $O(n^{|\mathcal{S}|})$. This yields a combined complexity of $O(2^{|C|} \cdot n^{|\mathcal{S}|})$. However, Ted++ in the general case (more than one induced partition) has a tighter upper bound: $O(n^{k_{E_1}} + n^{k_{E_2}} + \ldots + n^{k_{E_N}})$, where $k_E = |Part_E|$ for every combination $E$ and $N = 2^{|C|}$. It is easy to see that $n^{k_{E_1}} + \ldots + n^{k_{E_N}} < 2^{|C|} \cdot n^{|\mathcal{S}|}$ when there is more than one partition.
### 5. Experimental Evaluation

This section presents an experimental evaluation of Ted++, using real and synthetic datasets. In Sec. 5.1, we compare Ted++ with the existing algorithms returning query-based explanations, i.e., with NedExplain [5] and Why-Not [9]. Sec. 5.2 studies the runtime of Ted++ with respect to various parameters that we vary in a controlled manner. We have implemented the algorithms in Java. We ran the experiments on a MacBook Air running Mac OS X 10.9.5, with a 1.8 GHz Intel Core i5, 4GB memory, and a 120GB SSD. We used PostgreSQL 9.3 as database system.

#### 5.1 Comparative Evaluation

The comparative evaluation to Why-Not and NedExplain considers both efficiency (runtime) and effectiveness (explanation quality). When considering efficiency, we also include Ted in the comparison (Ted producing the same Why-Not explanation as Ted++). For the experiments in this section, we have used data from three databases named crime, imdb, and gov. The crime database corresponds to the sample crime database of the Trio system (available at http://infolab.stanford.edu/trio/) and was previously used to evaluate Why-Not and NedExplain. The data describes crimes and involved persons (suspects and witnesses). The imdb database contains real-world movie data from IMDb (http://www.imdb.com). Finally, the gov database contains information about US congressmen and their financial activities (data from http://bioguide.congress.gov, http://usaspending.gov, and http://earmarks.omb.gov). For each dataset, we have created a series of scenarios (crime1-gov5 in Tab. 3; ignore the remaining scenarios for now). Each scenario consists of a query defined in Tab. 2 (Q1-Q7) and a simple Why-Not question, as Why-Not and NedExplain support only this type of Why-Not question.
The queries have been designed to include queries with a small set of conditions (Q6) or a larger one (Q1, Q3, Q5, Q7), containing self-joins (Q3, Q4), having empty intermediate results (Q2), as well as containing inequalities (Q2, Q4, Q5, Q6).

#### 5.1.1 Why-Not Explanation Evaluation

Recall (Tab. 1) that the explanations returned by Why-Not and NedExplain consist of sets of query conditions, whereas Ted++ returns a polynomial of query conditions. For comparison purposes, we trivially map Ted++'s Why-Not explanation to sets of conditions, e.g., $3 \ast c_1 \ast c_4 + 2 \ast c_3 \ast c_6$ maps to $\{\{c_1, c_4\}, \{c_3, c_6\}\}$. For conciseness, we abbreviate condition sets, e.g., to $c_1c_4, c_3c_6$.

Tab. 4 summarizes the Why-Not explanations of the three algorithms. These scenarios make apparent that the explanations by NedExplain or Why-Not are incomplete, in two senses. First, they produce only a subset of the possible explanations, failing to provide alternatives that could be useful to the user when she tries to fix the query. Second, even the explanations they provide may lack conditions, which can drive the user to fruitless fixing attempts. On the contrary, Ted++ produces all the possible, complete explanations.

For the first argument, consider the scenario gov4. Why-Not and NedExplain return $c_1$ and $c_3$, respectively, but they both fail to indicate that both explanations are valid, as opposed to Ted++. Then, consider crime8. NedExplain returns the join condition $c_2$, while Why-Not falsely produces no explanation at all in this case. Ted++ indicates that, besides this join, the selection $c_3$ appears in the explanations as well. From a developer's perspective, selections are typically easier or more reasonable to change. So, having the complete set of explanations potentially provides the developer with useful alternatives. For the second argument, consider crime4. NedExplain returns the atomic explanation $c_1$.
The explanation of Ted++ does not contain the atomic explanation $c_1$, but there exist combinations including $c_1$ as a part, like $c_1c_5$. This means that the explanation by NedExplain is incomplete; a repair attempt of $c_1$ alone will never yield the desired results. Similarly, crime7 illustrates a case where the Why-Not algorithm produces an explanation $c_5$ that misses some parts. Then, in gov3, NedExplain and Why-Not both return $c_2$. However, let us now assume that the developer prefers not to change this condition. Keeping in mind that these algorithms' answers may change when changing the query tree, she may start trying different trees to possibly obtain a Why-Not explanation without $c_2$. Knowing the explanation of Ted++ prevents her from spending any effort on this, as it shows that all explanations contain $c_2$ as a part.

By mapping the explanation of Ted++ to sets of explanations, we have left aside an important property: the coefficients of the polynomial. For example, the complete Why-Not explanation polynomial of crime8 is $2 \ast c_3 \ast c_4 + 20 \ast c_3 + 4 \ast c_1 + 8 \ast c_2$. Assume that the developer would like to recover at least five missing tuples by changing as few conditions as possible. The polynomial suggests to change either $c_3$ or $c_2$: they both require one condition change and provide the possibility of obtaining up to 20 and 8 missing tuples, respectively. The coefficient of $c_1$ being 4 does not make $c_1$ a good candidate, whereas $c_3 \ast c_4$ requires two condition changes. Clearly, the results of NedExplain or Why-Not are not informative enough for such a discussion.

### 5.1.2 Runtime Evaluation

We now compare the performance w.r.t. runtime of Ted++ with the other algorithms.

**Ted++ vs. NedExplain and Why-Not.** For this comparative evaluation, we again consider scenarios crime1 through gov5 of Tab. 3, as they involve simple Why-Not questions, making them processable by all three algorithms. Fig.
5 summarizes the runtimes in logarithmic scale for each algorithm and scenario. We observe that the runtime of Ted++ is always comparable to the runtime of NedExplain and that, in some cases, it is significantly faster than Why-Not. Why-Not traces compatible tuples based on tuple lineage stored in Trio. As already stated in [5, 9], this design choice slows down Why-Not's performance. On the contrary, both NedExplain and Ted++ compute the compatible data more efficiently, by issuing queries directly to the underlying database.

Figure 5: Runtimes for Ted++, Ted, NedExplain and Why-Not

#### 5.2 Ted++ Analysis

We now study Ted++'s behavior w.r.t. the following parameters: (i) the type (simple or complex) of the input query $Q$ and the number of $Q$'s conditions, (ii) the type of the Why-Not question (simple or complex) and the number and selectivity of the conditions the Why-Not question involves, and (iii) the size of the database instance $I$. Note that (ii) and (iii) are tightly connected with the number of compatible tuples, which is one of the main parameters influencing the performance. In addition to the number of compatible tuples, another important factor is the selectivity of the query conditions over the compatible data (i).

**Experimental setup.** For the parameter variations (i) and (ii), we use again the crime, imdb, and gov databases. To adjust the database instance size for case (iii), we use data produced by the TPC-H benchmark data generator (http://www.tpc.org/tpch/). More specifically, we generate instances of 1GB and 10GB and further produce smaller data sets of 10MB and 100MB, to obtain a series of datasets whose size differs by a factor of 10. In this paper, we report results for the original query Q3 of the TPC-H set of queries. It includes two complex and three simple conditions, two of which are inequality conditions. Since the original TPC-H query Q3 is an aggregation query, we adjust it by removing the aggregation operator, to avoid the introduction of new tuples.
At the end of this paper, we discuss the impact of aggregation on the results of our experiments.

**Adjusting the query.** Given a fixed database instance and Why-Not question, we start from query Q1 and gradually add simple conditions, yielding the series of queries Q1, Q2, Q3, Q4. The evolution of the Ted++ runtime for this series of queries is shown in Fig. 7(a). Similarly, starting from query Q1, we introduce step by step complex conditions, yielding Q5, Q6. The corresponding runtime results are reported in Fig. 7(b). As expected, in both cases, increasing the number of query conditions (either complex or simple) results in increasing runtime. The slope of the curve depends on the selectivity of the introduced condition; the less selective the condition, the steeper the curve becomes. This is easy to explain: in the coefficientEstimation phase, a view contains more tuples (passing partial tuples) when the condition is less selective. This results in more computations in the super-combination iterations, leaving space for further optimizations.

**Adjusting the Why-Not question.** Next, we vary the type and the number of conditions in the Why-Not question $WN$. Fig. 8 shows the cases where we (a) start with a simple $WN$ and progressively add more simple conditions, and (b) start with a complex $WN$ and progressively add more complex conditions. The scenarios considered for Fig. 8(a) have as starting point the simple scenario crime5 (see Tab. 3). Then, keeping the same input instance and query, we add attribute-constant comparisons to $WN$, a procedure resulting in fewer compatible tuples at each step. As expected, the more conditions (and thus the fewer compatible tuples), the faster the Why-Not explanation is returned, until we reach a certain point (here, from crime5_c3 on).
From this point on, the runtime is dominated by the time to communicate with the database, which is constant over all scenarios. As we introduce complex conditions into \( WN \), the number of generated partitions (potentially) drops, as more relations are included in the same partition. To study the impact of the induced number of partitions in isolation, we keep the number of compatible tuples constant in our series of complex scenarios \((imdb_{cc3}, imdb_{cc2},\) and \(imdb_{cc})\). The numbers of partitions entailed by \(imdb_{cc3}, imdb_{cc2}\), and \(imdb_{cc}\) are 3, 2, and 1, respectively. The results of Fig. 8 (b) confirm our theoretical complexity discussion, i.e., as the number of partitions decreases, the time needed to produce the Why-Not explanation increases.

Increasing size of input instance. The last parameter we study is the input database size. To this end, we have created two scenarios, one with a simple and one with a complex Why-Not question \( WN \), both using the same query \(Q_{5}\). We run both scenarios for database sizes 10MB, 100MB, 1GB, and 10GB. The simple \( WN \) includes two inequality conditions, in order to obtain a reasonable number of compatible tuples. The complex \( WN \) contains one complex condition, one inequality simple condition, and one equality simple condition. It thus represents an average complex Why-Not question, creating two partitions over three relations. Fig. 9 (a) shows the runtimes for both scenarios. The increasing runtime is tightly coupled to the fact that the number of computed compatible tuples grows proportionally to the database size, as shown in Fig. 9 (b). We observe that for small datasets (<500MB) in the complex scenario, \(Ted++\)'s runtime grows at a low rate, whereas the rate is higher for larger datasets. For the simple scenario, runtime deteriorates at a steady pace.
This behavior is aligned with the theoretical study: when the number of partitions decreases, the complexity rises. In summary, our experiments have shown that \(Ted++\) generates a more informative, useful and complete Why-Not explanation than the state of the art. Moreover, \(Ted++\) is competitive in terms of runtime. The dedicated experimental evaluation of \(Ted++\) verifies that it can be used in a large variety of scenarios with different parameters and that the obtained runtimes match the theoretical expectations. Finally, the fact that the experiments were conducted on an ordinary laptop, with no special memory or disk capacity, supports \(Ted++\)'s practical feasibility.

6. CONCLUSION AND OUTLOOK

This paper provides a framework for Why-Not explanations based on polynomials, which makes it possible to treat relational databases under set, bag and probabilistic semantics in a unified way. To efficiently compute the Why-Not explanation polynomial under set semantics, we have designed a new algorithm, \(Ted++\), whose main feature is to completely avoid enumerating and iterating over the set of compatible tuples, thereby significantly reducing both space and time consumption. Our experimental evaluation showed that \(Ted++\) is at least as efficient as existing algorithms while providing a developer with useful insights through its Why-Not explanation. We also saw that \(Ted++\) scales well with various parameters, making it a practical solution. The proposed Why-Not explanation polynomials are easy to extend to unions of conjunctive queries, whereas the extension to aggregation queries is not trivial and is subject to future work. We are currently working on exploiting the Why-Not explanation polynomial to efficiently rewrite a query so that the missing answers are included in its result set. As there are many rewriting possibilities, we plan to select the most promising ones based on a cost function built from the polynomial.
For instance, we may rank more highly those rewritings with minimal condition changes (i.e., small combinations), minimal side-effects (i.e., small coefficients), etc.

7. REFERENCES
The Fault-Tree Compiler

Anna L. Martensen and Ricky W. Butler

NASA Technical Memorandum 89098 (NASA-TM-89098), NASA Langley Research Center, Hampton, Virginia 23665-5225, January 1987

<table> <thead> <tr> <th>CONTENTS</th> <th></th> </tr> </thead> <tbody> <tr> <td>INTRODUCTION</td> <td>1</td> </tr> <tr> <td>Fault Tree Construction</td> <td>2</td> </tr> <tr> <td>THE FTC USER INTERFACE</td> <td>6</td> </tr> <tr> <td>Basic Program Concept</td> <td>6</td> </tr> <tr> <td>Fault Tree Definition Syntax</td> <td>6</td> </tr> <tr> <td>Hierarchical Fault Trees</td> <td>12</td> </tr> <tr> <td>FTC Commands</td> <td>15</td> </tr> <tr> <td>FTC Graphics</td> <td>18</td> </tr> <tr> <td>EXAMPLE FTC SESSIONS</td> <td>19</td> </tr> <tr> <td>Outline of a Typical Session</td> <td>19</td> </tr> <tr> <td>Examples</td> <td>19</td> </tr> <tr> <td>CONCLUDING REMARKS</td> <td>27</td> </tr> <tr> <td>REFERENCES</td> <td>28</td> </tr> <tr> <td>APPENDIX A</td> <td>A-1</td> </tr> <tr> <td>Theory</td> <td>A-1</td> </tr> <tr> <td>Solution Technique</td> <td>A-4</td> </tr> <tr> <td>APPENDIX B - ERROR MESSAGES</td> <td>B-1</td> </tr> </tbody> </table>

INTRODUCTION

Fault tree analysis was first developed in 1961–62 by H.A. Watson of Bell Telephone Laboratories under an Air Force study contract for the Minuteman Launch Control System. The use of fault trees has since gained wide-spread support and is often used as a failure analysis tool by reliability experts. Though conceptually simple, especially for those with a knowledge of basic circuit logic, the fault tree can be a powerful tool. In its basic form, the fault tree has limitations; however, recent advances in the tree notation have often overcome these shortcomings. Even the basic fault tree, though, can be useful in preliminary design analysis.
The goal of the Fault Tree Compiler (FTC) program is to provide the user with a tool which can readily describe even the largest fault tree and, once given the tree description and event probabilities, can precisely calculate the probability of the top event in the tree. Automatic sensitivity analysis can also be performed, providing the user with a powerful design analysis capability. The motivation for the development of the Fault Tree Compiler began with the observation that the Computer Aided Reliability Estimation (CARE III) program (ref. 3) was often being used for the analysis of fault trees. Although CARE III can be used to solve fault trees, it was designed primarily to analyze complex reconfigurable systems where the fault-handling capabilities must be included in the reliability analysis. Therefore, it was not optimized for systems that can be described by a simple fault tree alone. The CARE III fault tree code provided a minimal framework for the FTC mathematical solution technique. A newer, faster solution was developed and implemented in FTC. To that was added a new front-end with a high-level language description of the fault tree. The improved solver and the Fault Tree Compiler's input language and sensitivity analysis capabilities provide a powerful fault tree solver. In short:

1) The FTC program has a simple yet powerful input language, which is easily mastered.
2) Automatic sensitivity analysis is provided.
3) The mathematical solution technique is exact to the five digits in the solution(s).
4) A hierarchical capability is provided which can reduce effort and run time.
5) FTC is capable of handling common mode events, where the same event may appear more than once in the fault tree.

The FTC solution technique has been implemented in FORTRAN, and the remaining code is Pascal. The program is designed to run on the Digital Equipment Corporation VAX computer operating under the VMS operating system.
A short tutorial on the construction of fault trees is provided. The tutorial will outline the basic gate types allowed by the FTC program and their use in describing an example system of interest.

**Fault Tree Construction**

An example fault tree structure is shown in Figure 1. The event of interest, referred to as the top event, appears as the top level in the tree. Only one top event is allowed. Basic events are the lowest level of the fault tree, and different combinations of basic events will result in the top event. In Figure 1, the basic events are indicated by small circles. The user associates a probability of occurrence with each basic event in the tree. Note that a basic event may appear more than once in the FTC fault tree and is then referred to as a common mode event. A useful feature of the FTC program is its ability to handle these common mode events. Events are combinations of basic events or other (lower) events. In Figure 1 the output of the OR gate is an event. Typical fault tree notation allows "comment" boxes to appear in the tree to describe an event. Though only two comment boxes appear in the example (to describe the hydraulic failure and the top event), boxes could have appeared above any event, basic or non-basic. Logic gates delineate the causal relations which ultimately result in the top event. The FTC program allows AND, OR, XOR, and INVERT gates as well as m OF n gates; these are defined precisely in the "FTC User Interface" section.

Figure 1

Fault trees are typically constructed by starting with the top event (usually an undesirable situation) and determining all possible ways to reach that event. This approach is often referred to as the "top down" or "backward" approach. An example of a "bottom up" or "forward" approach is the failure modes and effects analysis (FMEA), where the analyst starts with the different failure modes of the system components and traces the effects of the failures.
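Before working through the example, it may help to see the probabilistic meaning that FTC later assigns to these gates. The following Python fragment is purely illustrative (it is ours, not part of the FTC program) and assumes all input events are independent:

```python
from math import prod

def and_gate(ps):
    # AND gate: all independent input events occur
    return prod(ps)

def or_gate(ps):
    # OR gate: one or more of the independent input events occurs
    return 1.0 - prod(1.0 - p for p in ps)

def inv_gate(p):
    # INVERT gate: probabilistic complement of the input
    return 1.0 - p

def xor_gate(p, q):
    # XOR gate: one of the two events occurs, but not both
    return p * (1.0 - q) + q * (1.0 - p)

# Hydraulic failure as in Figure 1: lines severed OR fluid low.
# The probabilities 0.001 and 0.002 are made-up illustrative values.
print(or_gate([0.001, 0.002]))
```

With independent inputs, the OR output is one minus the probability that no input occurs, which is why the example prints a value slightly below the sum of the two input probabilities.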
The following short example illustrates the top down process by which a fault tree is constructed: The McDonnell Douglas F-15C fighter has three primary weapons systems: heat-seeking missiles, radar missiles, and the gun. Occasionally the gun will be inoperable, due possibly to one or more separate events. The fault tree shown in Figure 1 delineates the possible causes of in-flight gun no-fire. The pre-flight ground check includes the removal of several safety pins, including three pins which, once removed, will allow the gun to fire. A "rounds counter" on the plane determines the total number of rounds (bullets) to be fired. It is possible to completely restrict the firing of the gun with the proper rounds counter setting. The landing gear locked in the down position will also prevent the gun from firing. Additionally, loss of electric power to ignite the bullets, or of hydraulic power to rotate the barrels, will completely inhibit the gun. Loss of hydraulic power may occur if the hydraulic lines are severed, or the hydraulic fluid levels are low. It is not the goal of this paper to teach the construction of fault trees; however, this simple example illustrates several important elements of fault tree modeling:

1) All basic events must be independent. In probability theory, two events A and B are independent if

\[ P(AB) = P(A)P(B) \]

It is important to note that it is often very difficult to establish independence of events. In this example, it is assumed that the safety pin removal and setting of the rounds counter are independent events, even though both are performed during the pre-flight ground check.

2) Sequences of events cannot be modeled with the gates allowed by the FTC program. For many systems of interest, an event Z occurs if and only if event A occurs before event B. If event B occurs before A, a different result is seen.
At best, the analyst must define a basic event which is the result of some sequence of events and assign a probability to that basic event.

3) Mutually exclusive events must be handled with care. Basic events cannot be mutually exclusive. For example, basic event A cannot be defined as "Power on" and basic event B defined as "Power off." However, basic event A may be defined as "Power on", and an INVERT gate (which performs the probabilistic complement of the input) with basic event A as input may define the event "Power not on."

4) Typically, fault trees are developed to demonstrate the probability of some undesirable top event. A typical top event might be "Catastrophic System Failure." Generally, it is much faster to enumerate the ways that a system will fail than it is to enumerate the ways a system will succeed. Occasionally, however, it is more advantageous to create a "success" tree. The FTC program makes no distinction between the trees; it simply solves for the probability of the top event in a tree.

5) Basic events must be assigned a probability of occurrence. The FTC program also allows for failure rates to be assigned to basic events. The user must then supply a value for the mission time at which the probability of system failure will be evaluated. Parametric analysis is facilitated by allowing one basic event probability or rate to vary over a range of values. The syntax is described in the "FTC Fault Tree Definition Syntax" section of this paper.

For more information on fault trees, Reference [2] is recommended. The next section discusses the user interface for the FTC program, and example FTC sessions follow. Concluding remarks, Appendix A (The Solution Technique), and Appendix B (Error Messages) complete the paper.

THE FTC USER INTERFACE

Basic Program Concept

The user of the FTC program must define his fault tree using a simple language. This language will be discussed in detail in this section.
There are two basic statements in the language: the basic event definition statement and the gate definition statement. The basic event definition statement defines a fundamental event and associates a probability with this event. For example, the statement

\[ X: 0.002; \]

defines a fundamental event which occurs with probability 0.002. The gate definition statement defines a gate of the fault tree by specifying the gate type and all inputs. For example, the statement

\[ G1: \text{AND}(Q12, V123, L12, E5); \]

defines an and-gate with output G1 and inputs Q12, V123, L12 and E5.

FTC Fault Tree Definition Syntax

The basic event definition statement and the gate definition statement are the only essential ingredients of the FTC input language. However, the flexibility of the FTC program has been increased by adding several features commonly seen in programming languages such as FORTRAN or Pascal. The details of the FTC language are described in the following subsections.

Lexical details - The probabilities assigned to events are floating point numbers. The Pascal REAL syntax is used for these numbers. Thus, all the following would be accepted by the FTC program: 0.001 12.34 1.2E-4 1E-5

The semicolon is used for statement termination. Therefore, more than one statement may be entered on a line. Comments may be included any place that blanks are allowed. The notation "(*" indicates the beginning of a comment and "*)" indicates the termination of a comment. The following is an example of the use of a comment:

GYRO_F: 0.025; (* PROBABILITY OF A GYRO FAILURE *)

If statements are entered from a terminal (as opposed to by the READ command described below), then the carriage return is interpreted as a semicolon. Thus, interactive statements do not have to be terminated by an explicit semicolon unless more than one statement is entered on the line. In interactive mode, the FTC system will prompt the user for input by a line number followed by a question mark. For example,

1?
The number is a count of the current line plus the number of syntactically correct lines entered into the system thus far. Constant definitions - The user may equate numbers to identifiers. Thereafter, these constant identifiers may be used instead of the numbers. For example, LAMBDA = 0.0052; RECOVER = 0.005; Constants may also be defined in terms of previously defined constants: \[ \text{GAMMA} = 10 \times \text{LAMBDA}; \] In general, the syntax is "name" = "expression"; where "name" is a string of up to eight letters, digits, and underscores (_) beginning with a letter, and "expression" is an arbitrary mathematical expression as described in a subsequent section entitled "Expressions". Variable definition - In order to facilitate parametric analyses, a single variable may be defined. A range is given for this variable. The FTC system will compute the system reliability as a function of this variable. If the system is run in graphics mode (to be described later), then a plot of this function will be made. The following statement defines LAMBDA as a variable with range 0.001 to 0.009: \[ \text{LAMBDA} = 0.001 \text{ TO } 0.009; \] Only one such variable may be defined. A special constant, POINTS, defines the number of points over this range to be computed. This constant can be defined any time before the RUN command. For example, \[ \text{POINTS} = 25 \] specifies that 25 values over the range of the variable should be computed. The method used to vary the variable over this range can be either geometric or arithmetic and is best explained by example. Thus, suppose POINTS = 4, then Geometric: \[ \text{XV} = 1 \text{ TO}^* 1000; \] where the values of XV used would be 1, 10, 100, and 1000. Arithmetic: \[ XV = 1 \text{ TO}^+ 1000; \] where the values of \( XV \) used would be 1, 333, 667, and 1000. The \(*\) following the \( \text{TO} \) implies a geometric range. A \( \text{TO}^+ \) or simply \( \text{TO} \) implies an arithmetic range. 
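The two spacing rules can be mimicked outside FTC. The following Python sketch is ours (the function name and the rounding in the printout are not part of FTC) and reproduces the sweep a variable declaration generates:

```python
def sweep(lo, hi, points, geometric=False):
    """Values an FTC variable takes over its range: POINTS values
    from lo to hi, multiplicatively spaced for TO*, additively
    spaced for TO+ (or plain TO)."""
    if points < 2:
        return [lo]
    if geometric:
        # common ratio so that the last value lands on hi
        ratio = (hi / lo) ** (1.0 / (points - 1))
        return [lo * ratio ** i for i in range(points)]
    # common difference so that the last value lands on hi
    step = (hi - lo) / (points - 1)
    return [lo + step * i for i in range(points)]

# XV = 1 TO* 1000 with POINTS = 4  ->  1, 10, 100, 1000
print([round(v) for v in sweep(1, 1000, 4, geometric=True)])
# XV = 1 TO+ 1000 with POINTS = 4  ->  1, 334, 667, 1000
# (the memo lists the second value as 333, i.e. rounded down)
print([round(v) for v in sweep(1, 1000, 4)])
```

The geometric case matches the memo's example exactly; the arithmetic case differs only in how the intermediate values are rounded for display.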
One additional option is available, the \( \text{BY} \) option. By following the above syntax with \( \text{BY} \) "inc", the value of POINTS is automatically set such that the value is varied by adding or multiplying the specified amount. For example, \[ V = 1\text{E}-6 \text{ TO}^* 1\text{E}-2 \text{ BY } 10; \] sets POINTS equal to 5 and the values of \( V \) used would be \( 1\text{E}-6, 1\text{E}-5, 1\text{E}-4, 1\text{E}-3, \) and \( 1\text{E}-2. \) The statement \[ Q = 3 \text{ TO}^+ 5 \text{ BY } 1; \] sets POINTS equal to 3, and the values of \( Q \) used would be 3, 4, and 5. In general, the syntax is

"var" = "expression" TO{"c"} "expression" { BY "inc" }

where "var" is a string of up to eight letters and digits beginning with a letter, "expression" is an arbitrary mathematical expression as described in the next section, and the optional "c" is a + or *. The BY clause is optional; if it is used, then "inc" is any arbitrary expression.

Expressions - When defining constants or an event probability, arbitrary functions of the constants and the variable may be used. The following operators may be used:

+ addition
- subtraction
* multiplication
/ division
** exponentiation

The following standard functions may be used:

EXP(X) exponential function
LN(X) natural logarithm
SIN(X) sine function
COS(X) cosine function
ARCSIN(X) arc sine function
ARCCOS(X) arc cosine function
ARCTAN(X) arc tangent function
SQRT(X) square root

Both ( ) and [ ] may be used for grouping in the expressions. The following are permissible expressions: 2E-4 1 - [EXP( -LAMBDA*TIME)]

Basic Event Definition - The fundamental events of the fault tree (i.e. events which are not the outputs of a gate in the tree) must be assigned probabilities. This is accomplished using the basic-event definition statement.
This statement has the following syntax: <event-id> : <expression>; where <event-id> is the name of the event and <expression> is an expression defining the probability of the event which evaluates to a number between 0 and 1. Alternately, the user can specify the rate of an event. This can be accomplished using the following syntax: \[ \text{<event-id>} \rightarrow \text{<rate-expression>;} \] where <event-id> is the name of the event and <rate-expression> is an expression defining the rate of the event. The probability of the event is calculated by the program using the following formula: \[ \text{Prob[ event ]} = 1.0 - \text{EXP}( - \text{<rate-expression>} * \text{TIME} ) \] where TIME is the value of the special constant TIME which defines the mission time. If TIME is not defined by the user, then the program uses 10 for the mission time. Note that this formula represents the standard exponential distribution function, \[ F(t) = 1 - e^{-\lambda t} . \]

**Gate Definition** - Once all of the fundamental events are defined, the gate definition statement may be used to define the structure of the fault tree. The syntax of this statement is \[ \text{<output-id>}: \left\{ \begin{array}{l} \text{OR} \\ \text{INV} \\ \text{XOR} \\ \text{AND} \end{array} \right\}( \text{<input>}, \text{<input>}, \ldots ); \] or \[ \text{<output-id>}: \text{<int> OF} ( \text{<input>}, \text{<input>}, \ldots ); \] The <output-id> is the name of the (non-basic) event which is the output of the gate. The type of gate is indicated by the reserved words OR, AND, INV, XOR or OF as follows:

AND - output probability is the probability of all events occurring
OR - output probability is the probability of one or more events occurring
XOR - output probability is the probability of one of the events, but not both, occurring (i.e. EXCLUSIVE OR gate)
INV - output probability is the probabilistic complement of the input (i.e. INVERT gate).
m OF - output probability is the probability of m or more events occurring (i.e. M of N gate)

Any number of input events may be included within the parentheses. The following gate definition statements are valid:

G1: AND(X, Y, Z);
G2: OR(A1, A2, A3);
PLANE_CRASH: 2 OF (ENGINE1_FAILS, ENGINE2_FAILS, HYDRAULIC_FAILURE);
DESPAIR: OR(GET_THE_FLU, MOTHER_IN_LAW_STAYS_2_WEEKS, MEET_DAUGHTERS_NEW_BOYFRIEND);

Hierarchical Fault Trees

Often a system consists of several identical independent subsystems. In order to preserve the independence, it is necessary to replicate the subsystem fault tree in the system model. For example, suppose we have a system which contains four identical independent subsystems. The system fails when three of the subsystems fail. Each subsystem consists of four components. If any component fails, the subsystem fails. The following fault tree describes the subsystem:

COMP_1: .01; COMP_2: .02; COMP_3: .03; COMP_4: .05;
SUBSYSTEM_FAILS: OR(COMP_1, COMP_2, COMP_3, COMP_4);

The system fault tree is as follows:

SYSTEM_FAILS: 3 OF (SUBSYS_1_FAILS, SUBSYS_2_FAILS, SUBSYS_3_FAILS, SUBSYS_4_FAILS);

Unfortunately, in order to integrate these sections into one fault tree, the subsystem has to be replicated four times (each replicate with different names):

```
SUBSYS_1_COMP_1: .01; SUBSYS_1_COMP_2: .02; SUBSYS_1_COMP_3: .03; SUBSYS_1_COMP_4: .05;
SUBSYS_1_FAILS: OR(SUBSYS_1_COMP_1, SUBSYS_1_COMP_2, SUBSYS_1_COMP_3, SUBSYS_1_COMP_4);
SUBSYS_2_COMP_1: .01; SUBSYS_2_COMP_2: .02; SUBSYS_2_COMP_3: .03; SUBSYS_2_COMP_4: .05;
SUBSYS_2_FAILS: OR(SUBSYS_2_COMP_1, SUBSYS_2_COMP_2, SUBSYS_2_COMP_3, SUBSYS_2_COMP_4);
SUBSYS_3_COMP_1: .01; SUBSYS_3_COMP_2: .02; SUBSYS_3_COMP_3: .03; SUBSYS_3_COMP_4: .05;
SUBSYS_3_FAILS: OR(SUBSYS_3_COMP_1, SUBSYS_3_COMP_2, SUBSYS_3_COMP_3, SUBSYS_3_COMP_4);
SUBSYS_4_COMP_1: .01; SUBSYS_4_COMP_2: .02; SUBSYS_4_COMP_3: .03; SUBSYS_4_COMP_4: .05;
SUBSYS_4_FAILS: OR(SUBSYS_4_COMP_1, SUBSYS_4_COMP_2, SUBSYS_4_COMP_3, SUBSYS_4_COMP_4);
SYSTEM_FAILS: 3 OF (SUBSYS_1_FAILS, SUBSYS_2_FAILS, SUBSYS_3_FAILS, SUBSYS_4_FAILS);
```

Obviously, this is a tedious process. Therefore, the FTC program provides the user with a hierarchical fault-tree capability. The following model is semantically equivalent to the previous fault tree:

```
SUBTREE SUBSYSTEM_FAILS;
COMP_1: .01; COMP_2: .02; COMP_3: .03; COMP_4: .05;
TOP: OR(COMP_1, COMP_2, COMP_3, COMP_4);
TREE SYSTEM_FAILS;
SUBSYS_1_FAILS: SUBSYSTEM_FAILS; SUBSYS_2_FAILS: SUBSYSTEM_FAILS;
SUBSYS_3_FAILS: SUBSYSTEM_FAILS; SUBSYS_4_FAILS: SUBSYSTEM_FAILS;
TOP: 3 OF (SUBSYS_1_FAILS, SUBSYS_2_FAILS, SUBSYS_3_FAILS, SUBSYS_4_FAILS);
```

The model is defined in two sections. The first section defines a subtree which is named SUBSYSTEM_FAILS. This subtree is solved by the program and the probability of its top event is saved in the identifier SUBSYSTEM_FAILS. In subsequent trees or subtrees this identifier can be used. In the above model, four events in the main tree are given the probability of the subsystem, i.e. SUBSYSTEM_FAILS. To simplify the analysis of the effect of a system parameter on the probability of the top event, global variables and constants may be used. These must be defined before any subtrees are defined. The effect of a change in the failure probability of a component in the previous model could be investigated using the following model:

```
FP = .01 TO .05 BY .01;
SUBTREE SUBSYSTEM_FAILS;
COMP_1: FP; COMP_2: .02; COMP_3: .03; COMP_4: .05;
TOP: OR(COMP_1, COMP_2, COMP_3, COMP_4);
TREE SYSTEM_FAILS;
SUBSYS_1_FAILS: SUBSYSTEM_FAILS; SUBSYS_2_FAILS: SUBSYSTEM_FAILS;
SUBSYS_3_FAILS: SUBSYSTEM_FAILS; SUBSYS_4_FAILS: SUBSYSTEM_FAILS;
TOP: 3 OF (SUBSYS_1_FAILS, SUBSYS_2_FAILS, SUBSYS_3_FAILS, SUBSYS_4_FAILS);
```

FTC Commands

Two types of commands have been included in the user interface. The first type of command is initiated by a reserved word: EXIT INPUT PLOT READ RUN SHOW. The second type of command is invoked by setting one of the special constants CARE3 ECHO LIST POINTS TIME equal to one of its pre-defined values.
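Before turning to the individual commands, the hierarchical 3-of-4 example above can be checked numerically. The brute-force enumeration below is our illustration, assuming independent events; it is not FTC's actual solution technique (which is described in Appendix A):

```python
from itertools import combinations
from math import prod

def or_gate(ps):
    # OR gate: one or more of the independent input events occurs
    return 1.0 - prod(1.0 - p for p in ps)

def m_of_n(m, ps):
    # m OF gate: m or more of the independent input events occur,
    # summed over every pattern of exactly k occurrences, k >= m
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for occurred in combinations(range(n), k):
            total += prod(ps[i] if i in occurred else 1.0 - ps[i]
                          for i in range(n))
    return total

# SUBSYSTEM_FAILS: OR(COMP_1, COMP_2, COMP_3, COMP_4)
sub = or_gate([0.01, 0.02, 0.03, 0.05])
# SYSTEM_FAILS: 3 OF four independent copies of the subsystem
print(m_of_n(3, [sub] * 4))
```

Replicating the subsystem probability four times is exactly what the hierarchical model does: each SUBSYS_i_FAILS event independently takes the value computed for the SUBTREE's top event.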
EXIT - The EXIT command causes termination of the FTC program. INPUT - This command increases the flexibility of the READ command. Within the model description file created with a text editor, INPUT commands can be inserted that will prompt for values of specified constants while the model file is being processed by the READ command. For example, the command INPUT LVAL; will prompt the user for a number as follows: LVAL? and a new constant LVAL is created that is equal to the value input by the user. Several constants can be interactively defined using one statement, for example: INPUT X, Y, Z; PLOT - The PLOT command can be used to plot the output on a graphics display device. This command is described in detail in the next section, "FTC Graphics." **READ** - A sequence of FTC statements may be read from a disk file. The following interactive command reads FTC statements from a disk file named SIFT.MOD: ``` READ SIFT.MOD; ``` If no file name extent is given, the default extent .MOD is assumed. A user can build a model description file using a text editor and use this command to read it into the FTC program. **RUN** - After a fault tree has been fully described to the FTC program, the RUN command is used to initiate the computation: ``` RUN; ``` The output is displayed on the terminal according to the LIST option specified. If the user wants the output written to a disk file instead, the following syntax is used: ``` RUN "outname"; ``` where the output file "outname" may be any permissible VAX VMS file name. Two positional parameters are available on the RUN command. These parameters enable the user to change the value of the special constants POINTS and LIST in the RUN command. For example ``` RUN (30,2) OUTFILE.DAT ``` is equivalent to the following sequence of commands: ``` POINTS = 30; LIST = 2; RUN OUTFILE.DAT ``` Each parameter is optional so the following are acceptable: RUN(10); — change POINTS to 10 then run. RUN(0); — change LIST to 0 and run. 
RUN(20,1); — change POINTS to 20 and LIST to 1 then run.

SHOW — The value of an identifier may be displayed by the following command: SHOW ALPHA;

CARE3 — If set equal to 1 the program will generate a file containing the fault tree in the CARE III syntax. The default value of 0 specifies that no CARE III file be written. The name of the generated file is CARE3.TRE. Note that the input range is completely specified, but that the upper value on the output range is specified by "X". The user must edit the file, supplying the appropriate upper value for the output range, and insert the tree into an otherwise complete CARE III input file.

ECHO — The ECHO constant can be used to turn off the echo when reading a disk file. The default value of ECHO is 1, which causes the model description to be listed as it is read. (See example 4 in the section entitled "Example FTC Sessions.")

LIST — The amount of information output by the program is controlled by this constant. Two list modes are available as follows:

LIST = 0; No output is sent to the terminal, but the results can still be displayed using the PLOT command.
LIST = 1; Output sent to terminal. This is the default.

POINTS — The POINTS constant specifies the number of points to be calculated over the range of the variable. The default value is 25. If no variable is defined, then this specification is ignored.

**TIME** - The **TIME** constant specifies the mission time. The **TIME** constant has meaning only when the model includes failure rates, which depend upon time. For example, if the user sets **TIME** = 1.3, the program computes the probability of the top event at mission time equal to 1.3. The default value of **TIME** is 10.

**FTC Graphics**

Although the FTC program is easily used without graphics output, many users desire the increased user-friendliness of the tool when assisted by graphics. The FTC program can plot the probability of system failure as a function of any model parameter.
The output from several FTC runs can be displayed together in the form of contour plots. Thus, the effect on system reliability of two model parameters can be illustrated on one plot.

**PLOT command** - After a RUN command, the PLOT command can be used to plot the output on the graphics display. The syntax is

```
PLOT <op>, <op>, ... <op>
```

where <op> are plot options. Any TEMPLATE "USET" or "UPSET" parameter can be used, but the following are the most useful:

- **XLOG** - plot x-axis using logarithmic scale
- **YLOG** - plot y-axis using logarithmic scale
- **XYLOG** - plot both x- and y-axes using logarithmic scales
- **NOLO** - plot x- and y-axes with normal scaling
- **XLEN=5.0** - set x-axis length to 5.0 in.
- **YLEN=8.0** - set y-axis length to 8.0 in.
- **XMIN=2.0** - set x-origin 2 in. from left side of screen
- **YMIN=2.0** - set y-origin 2 in. above bottom of screen

The PLOTINIT and PLOT+ commands are used to display multiple runs on one plot. A single run of FTC generates unreliability as a function of a single variable. To see the effect of a second variable (i.e. display contours of a 3-dimensional surface) the PLOT+ command is used. The PLOTINIT command should be called before performing the first FTC run. This command defines the second variable (i.e. the contour variable):

```
PLOTINIT BETA;
```

This defines BETA as the second independent variable. Next, the user must set BETA to its first value. After the run is complete, the output is plotted using the PLOT+ command. The parameters of this command are identical to the PLOT command. The only difference is that the data is saved, so it can be displayed in conjunction with subsequent run data. Next, BETA must be set to a second value, another FTC run made, and PLOT+ must be called again. This time both outputs will be displayed together. Up to ten such runs can be displayed together.

EXAMPLE FTC SESSIONS

**Outline of a Typical Session**

The FTC program was designed for interactive use.
The following method of use is recommended:

1. Using a text editor, create a file of FTC commands describing the fault tree to be analyzed.

2. Start the FTC program and use the READ command to retrieve the model information from this file.

3. Then, various commands may be used to change the values of the special constants, such as LIST, POINTS, etc., as desired. Altering the value of a constant identifier does not affect any transitions entered previously even though they were defined using a different value for the constant. The range of the variable may be changed after transitions are entered.

4. Enter the RUN command to initiate the computation.

**Examples**

The following examples illustrate interactive FTC sessions. For clarity, all user inputs are given in lower-case letters.

Example 1 - This session illustrates direct interactive input and the type of error messages given by FTC:

```
$ FTC
FTC V1.0  NASA Langley Research Center
 1? lambda = 1e-4;
 2? x: 1.0 - exp( -lambda*time);
 3? y: 1.0 - exp( -lambda*time);
       ^ IDENTIFIER NOT DEFINED
 3? y: 1.0 - exp( -lambda*time);
 4? top: or(x,y);
 5? run

---------- Pr[TOP EVENT] ----------
          1.99800E-03

*** WARNING: SYNTAX ERRORS PRESENT BEFORE RUN

0.420 SECS. CPU TIME UTILIZED
 6? exit
```

The warning message is simply informative. If a user receives this message, he should check his input file to make sure that the model description is correct. In this example, since the syntax error was corrected in the next line, the model was correct. A complete list of program-generated error messages is given in APPENDIX B.

Example 2 - This example demonstrates the use of the hierarchical fault tree capability to partially describe an aircraft pitch control architecture. The proposed architecture is composed of four independent actuator subsystems and the supporting hydraulic and electronic systems. Each of the actuator subsystems is comprised of a pitch rate sensor, a computer, and the actuator.
Two of the four actuator subsystems failing will result in loss of pitch control. Likewise, loss of either the hydraulic or electronic system will cause loss of pitch control. Example 3 - This example illustrates the use of the FTC program to process the fault tree used in the Integrated Airframe Propulsion Control System Architecture (IAPSA II) project to analyze surface control failures. (See ref. 1.) The surface control system has three separate actuation channels each consisting of an actuation stage and a disengage device stage. The actuation channels are brickwalled with force voting at the control surface. Channel self-monitoring techniques are the primary method of fault detection and isolation. Each actuation channel contains two special devices for fault tolerance. The disengage device can deactivate a faulty channel. The surface can be controlled by one channel if the other two channels have been deactivated. Additionally, an override device in each channel allows two good channels to overpower a channel with a failed disengage device. Thus, surface failure (top event) can occur in two ways: (1) loss of all three actuation channels, (2) loss of two channels when one of the lost channels has a failed disengage device. 
The following tree describes these aspects of the system failure process:

```
TREE SURFACE_FAILURE;
CH1: CH_FAULT;   (* Channel 1 failure *)
CH2: CH_FAULT;   (* Channel 2 failure *)
CH3: CH_FAULT;   (* Channel 3 failure *)
DD1 -> 6.0E-6;   (* Channel 1 disengage device failure *)
DD2 -> 6.0E-6;   (* Channel 2 disengage device failure *)
DD3 -> 6.0E-6;   (* Channel 3 disengage device failure *)
LOSS_OF_ALL_CHANNELS: AND(CH1, CH2, CH3);
SF1A: AND(CH1, DD1);
SF1B: OR(CH2, CH3);
CHANNEL1_UNISOLATED: AND(SF1A, SF1B);
SF2A: AND(CH2, DD2);
SF2B: OR(CH1, CH3);
CHANNEL2_UNISOLATED: AND(SF2A, SF2B);
SF3A: AND(CH3, DD3);
SF3B: OR(CH1, CH2);
CHANNEL3_UNISOLATED: AND(SF3A, SF3B);
TOP: OR(LOSS_OF_ALL_CHANNELS, CHANNEL1_UNISOLATED, CHANNEL2_UNISOLATED, CHANNEL3_UNISOLATED);
```

Next, the failures leading to an actuation channel breakdown must be enumerated in a subtree. An actuation channel failure can occur because of the loss of the I/S bus, lack of two surface commands, or a fault in the actuation channel elements — the I/S bus terminal, the elevator processor, and the electrical and mechanical actuation hardware. Surface commands can be lost due to command generation faults or computer bus terminal faults.
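As an illustrative cross-check (ordinary Python, not FTC syntax), the top-level gate logic of the SURFACE_FAILURE tree can be evaluated as Booleans to confirm the two stated failure modes: loss of all three channels, or loss of two channels when one of the lost channels has a failed disengage device. The function name and argument layout below are hypothetical, introduced only for this sketch.

```python
def surface_failure(ch, dd):
    """Evaluate the SURFACE_FAILURE gate structure as Booleans.

    ch[i] -- channel i has failed (CH1..CH3)
    dd[i] -- disengage device i has failed (DD1..DD3)
    """
    # LOSS_OF_ALL_CHANNELS: AND(CH1, CH2, CH3)
    loss_of_all = ch[0] and ch[1] and ch[2]
    # CHANNELi_UNISOLATED: AND(AND(CHi, DDi), OR(CHj, CHk))
    unisolated = [
        ch[i] and dd[i] and (ch[j] or ch[k])
        for i, j, k in ((0, 1, 2), (1, 0, 2), (2, 0, 1))
    ]
    # TOP: OR of all four intermediate gates
    return loss_of_all or any(unisolated)

# Mode (1): all three actuation channels lost.
assert surface_failure([True, True, True], [False, False, False])
# Mode (2): two channels lost, one of them with a failed disengage device.
assert surface_failure([True, True, False], [True, False, False])
# Two channels lost but both disengage devices work: surface still controlled.
assert not surface_failure([True, True, False], [False, False, False])
print("gate logic matches the stated failure modes")
```

The comprehension mirrors the three CHANNELx_UNISOLATED gates; only the channel and disengage-device states vary, exactly as in the tree listing.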
The following subtree describes actuation channel failure (the command-path gates are named AC1 through AC4, matching their use in the LOSE_TWO_COMMANDS gate):

```
SUBTREE CH_FAULT:
C1_COMMAND: COMMAND_FAILURE;   (* Loss of command 1 *)
C2_COMMAND: COMMAND_FAILURE;   (* Loss of command 2 *)
C3_COMMAND: COMMAND_FAILURE;   (* Loss of command 3 *)
C4_COMMAND: COMMAND_FAILURE;   (* Loss of command 4 *)
C11 -> 1E-6;     (* computer 1 bus terminal fault *)
C21 -> 1E-6;     (* computer 2 bus terminal fault *)
C31 -> 1E-6;     (* computer 3 bus terminal fault *)
C41 -> 1E-6;     (* computer 4 bus terminal fault *)
ACT1 -> 90E-6;   (* fault in actuation channel elements *)
B1 -> 20E-6;     (* failure in I/S bus *)
AC1: OR(C1_COMMAND, C11);
AC2: OR(C2_COMMAND, C21);
AC3: OR(C3_COMMAND, C31);
AC4: OR(C4_COMMAND, C41);
LOSE_TWO_COMMANDS: 3 OF (AC1, AC2, AC3, AC4);
TOP: OR(ACT1, B1, LOSE_TWO_COMMANDS);
```

A command generation fault can occur due to lack of PCS data, IRADC data, or computer failure.
The loss of data can be due to data source failure or I/S bus failure or computer bus terminal failure (the PCS data events are named PCS1 through PCS3, matching their use in the LPD gates):

```
SUBTREE COMMAND_FAILURE:
PCS1 -> 11.0E-6;     (* loss of channel 1 PCS data *)
B1 -> 20E-6;         (* failure in channel 1 I/S bus *)
C11 -> 1E-6;         (* bus terminal to channel 1 fault *)
IRADC1 -> 122.5E-6;  (* loss of channel 1 IRADC data *)
PCS2 -> 11.0E-6;     (* loss of channel 2 PCS data *)
B2 -> 20E-6;         (* failure in channel 2 I/S bus *)
C12 -> 1E-6;         (* bus terminal to channel 2 fault *)
IRADC2 -> 122.5E-6;  (* loss of channel 2 IRADC data *)
PCS3 -> 11.0E-6;     (* loss of channel 3 PCS data *)
B3 -> 20E-6;         (* failure in channel 3 I/S bus *)
C13 -> 1E-6;         (* bus terminal to channel 3 fault *)
IRADC3 -> 122.5E-6;  (* loss of channel 3 IRADC data *)
CLC1 -> 100.0E-6;    (* computer failure rate *)
LPD1: OR(PCS1, B1, C11);
LPD2: OR(PCS2, B2, C12);
LPD3: OR(PCS3, B3, C13);
PCS_DATA_LOSS: AND(LPD1, LPD2, LPD3);
LID1: OR(IRADC1, B1, C11);
LID2: OR(IRADC2, B2, C12);
LID3: OR(IRADC3, B3, C13);
IRADC_DATA_LOSS: AND(LID1, LID2, LID3);
TOP: OR(PCS_DATA_LOSS, IRADC_DATA_LOSS, CLC1);
```

The above model was available in file IAPSA.MOD prior to the following interactive session.

```
$ FTC
FTC V1.0  NASA Langley Research Center
 1? echo = 0
 2? read IAPSA
45? run

---------- Pr[TOP EVENT] ----------
          1.76344E-09

12.230 SECS. CPU TIME UTILIZED
```

Example 4 - This example illustrates the use of the program to investigate the sensitivity of a fault tree to a parameter.

$ FTC
FTC V1.0  NASA Langley Research Center
 1?
```
READ EX5;
 2: V = 0 TO 1 BY .1;
 3: G11: V;
 4: G12: V/2;
 5: G13: SQRT(V);
 6: G14: 1 - V;
 7: G15: 1 - V/2;
 8: G16: 1 - SQRT(V);
 9: G17: V*(1-V);
10: G18: (1-V)*(1-V*V)*(1-V*3);
11:
12: A21: AND(G11,G12,G13);
13: A22: OR(G12,G13);
14: A23: XOR(G13,G14);
15: A24: 3OF(G15,G16,G17,G18);
16: A25: AND(G16,G17);
17:
18: B31: OR(A21,A25);
19: B32: INV(A22);
20: B33: OR(A24,A22);
21:
22: C41: AND(B31,A23);
23: C42: AND(B33,B32,A22);
24:
25: TOP: 2OF(C41,C42,A25,A23);
26? run
```

<table>
<thead>
<tr>
<th>V</th>
<th>Pr[TOP EVENT]</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.000000E+00</td>
<td>0.000000E+00</td>
</tr>
<tr>
<td>1.000000E-01</td>
<td>3.996555E-02</td>
</tr>
<tr>
<td>2.000000E-01</td>
<td>4.86548E-02</td>
</tr>
<tr>
<td>3.000000E-01</td>
<td>5.23680E-02</td>
</tr>
<tr>
<td>4.000000E-01</td>
<td>6.02219E-02</td>
</tr>
<tr>
<td>5.000000E-01</td>
<td>7.75698E-02</td>
</tr>
<tr>
<td>6.000000E-01</td>
<td>1.09150E-01</td>
</tr>
<tr>
<td>7.000000E-01</td>
<td>1.60335E-01</td>
</tr>
<tr>
<td>8.000000E-01</td>
<td>2.37549E-01</td>
</tr>
<tr>
<td>9.000000E-01</td>
<td>3.48165E-01</td>
</tr>
<tr>
<td>1.000000E+00</td>
<td>5.00000E-01</td>
</tr>
</tbody>
</table>

4.090 SECS. CPU TIME UTILIZED

```
27? PLOT
28? DISP COPY
```

(See figure 2.)

Figure 2

CONCLUDING REMARKS

The Fault-Tree Compiler is a new reliability analysis program based on combinatorial mathematics. The program has three major strengths: 1) the input language is easy to understand and learn, 2) automatic sensitivity analysis is provided by varying a parameter over a range of values, and 3) the answer provided by the program is precise. Additionally, the use of the hierarchical fault tree capability can reduce model complexity. The program can be used for an exact analysis or a simple analysis of any system of interest.
REFERENCES

APPENDIX A

**Theory**

The Fault Tree Compiler program solution technique relies upon three basic model assumptions:

1) System components, or basic events, fail independently.

2) Components are either failed or operational; an "in-between" state does not exist.

3) The system is either failed or operational; no "in-between" state exists.

Figure A1 illustrates a representative fault tree. In the following discussion, the fault tree is generalized to have \( n \) basic events and a probability of occurrence associated with each. Basic events will be referred to as "components" and a probability of "failure" will be attributed to each of the components in the system. Let

\( A_{i0} \) represent the event component \( A_i \) has not failed,
\( A_{i1} \) represent the event component \( A_i \) has failed.

By defining the variable \( v_i \) as

\[
v_i = \begin{cases} 0 & \text{if system component } i \text{ has not failed} \\ 1 & \text{if system component } i \text{ has failed,} \end{cases} \quad 1 \leq i \leq n
\]

we have, by independence of the \( n \) basic events,

\[
P(A_{1v_1} A_{2v_2} \ldots A_{nv_n}) = P(A_{1v_1})P(A_{2v_2}) \ldots P(A_{nv_n}) = \prod_{i=1}^{n} P(A_{iv_i}) = \prod_{i=1}^{n} \left( (1-v_i)P(A_{i0}) + P(A_{i1})v_i \right).
\]

It is possible to enumerate all combinations of components-failed and components-operational describing the different possible states of the system. Each system state can be represented by an n-dimensional "binary" vector composed of 1's and 0's, where 1 indicates that the component has failed and 0 indicates that the component has not failed. The event that all n components fail, for example, would be represented by the vector \((1 \ 1 \ 1 \ \ldots \ 1)\). A system composed of four basic events would generate the following sixteen binary vectors:

```
 1. (0 0 0 0)    5. (0 0 0 1)    9. (0 1 1 0)   13. (1 1 0 1)
 2. (1 0 0 0)    6. (1 1 0 0)   10. (0 1 0 1)   14. (1 0 1 1)
 3. (0 1 0 0)    7. (1 0 1 0)   11. (0 0 1 1)   15. (0 1 1 1)
 4. (0 0 1 0)    8. (1 0 0 1)   12. (1 1 1 0)   16. (1 1 1 1)
```

Clearly, for an n-component system there are \(2^n\) possible binary vectors representing \(2^n\) distinct system states. The jth system state is denoted by \(s_j\) and its associated probability by \( P(s_j) \). Note that the jth binary vector can be written in terms of the variables \( v_{ij} \) as \( (v_{1j}, v_{2j}, \ldots, v_{nj}) \), and that the probability of the jth system state is

\[
P(s_j) = \prod_{i=1}^{n} \left( (1-v_{ij})P(A_{i0}) + P(A_{i1})v_{ij} \right).
\]

The sample space \( S \) is the set of all possible system states denoted by the \( 2^n \) binary vectors. By definition,

\[ P(S) = 1. \]

Because the components are either failed or not-failed, the \( 2^n \) binary vectors exhaustively describe all possible system states. Therefore,

\[ P(s_1+s_2+\ldots+s_{2^n}) = P(S) = 1. \]

Clearly, the system can be in one and only one state at any given time, indicating that the system states are mutually exclusive. By definition,

\[ P(s_is_j) = 0 \quad \text{for every } i \text{ and } j \neq i \]

and, for the \( 2^n \) mutually exclusive system states,

\[ P(s_1+s_2+\ldots+s_{2^n}) = P(s_1) + P(s_2) + \ldots + P(s_{2^n}). \]

To calculate the probability of system failure, the sample space \( S \) composed of \( 2^n \) system states is divided into two subsets, where one subset contains all states representing system failure, and the other subset contains all states in which the system is operational. Because the system must be either failed or operational, these two subsets are clearly exhaustive and mutually exclusive. Furthermore, the system states composing each of these two subsets are mutually exclusive, and the probability of either subset can be found by summing the appropriate individual system state probabilities.
Therefore, calculating the total probability of system failure by simply summing the probabilities of the system configurations that represent system failure is exact.

**The Solution Technique**

The program solution technique generates binary vectors in an orderly fashion, starting with the binary vector \((0 \ 0 \ 0 \ \cdots \ 0_n)\). Vectors are checked through the user-defined system tree; for each binary vector found to represent a system-fail configuration, its probability is added to a running total of binary vector probabilities, where all vector probabilities in the sum represent system-fail configurations. As shown above, the total number of binary vectors to be checked through the system tree is \(2^n\). For systems with many components, the number of fault vectors to check through the system tree can be very large.

A simple and effective pruning technique has been developed to reduce the total number of fault vectors checked through the system tree. The pruning technique will not affect the FTC program answer; it will reduce run-time and improve efficiency. Consider a system composed of ten highly reliable components. The probability that any one component fails is low; the probability that all components fail is much, much lower. It is often true that the latter probability, when summed with the former probability, will not affect the answer within five or even ten or more significant digits. A threshold value can be established, and binary vectors with probabilities less than the threshold may be disregarded.

The FTC program establishes a threshold by first calculating a "weight" function. A worst case scenario is solved, where it is assumed that \(2^n-1\) vectors, all of value \(10^{-k}\), are to be pruned. The sum of all the values must be less than or equal to \(0.5 \times 10^{-d}\) (\(d = \) the number of significant digits in the FTC answer).
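The exact-summation and pruning-threshold ideas just described can be sketched in ordinary Python. This is an illustrative sketch only; the names `state_probability` and `threshold_exponent` are not part of FTC. The first part reproduces the Example 1 result, Pr[TOP EVENT] = 1.99800E-03, by summing \(P(s_j)\) over all failing binary vectors; the second solves the worst-case bound for the smallest pruning exponent \(k\).

```python
import math
from itertools import product

# Per-component failure probabilities P(A_i1); here the two basic events
# of Example 1: x and y, each 1 - exp(-lambda * TIME) with lambda = 1e-4
# and the default TIME = 10.
p_fail = [1 - math.exp(-1e-4 * 10)] * 2

def state_probability(v, p):
    """P(s_j) = prod_i ((1 - v_i) * P(A_i0) + P(A_i1) * v_i)."""
    return math.prod((1 - vi) * (1 - pi) + pi * vi
                     for vi, pi in zip(v, p))

def top_event(v):
    """TOP: OR(x, y) from Example 1, evaluated on a binary vector."""
    x, y = v
    return bool(x) or bool(y)

# Exact solution: sum P(s_j) over every binary vector that fails the tree.
pr_top = sum(state_probability(v, p_fail)
             for v in product((0, 1), repeat=len(p_fail))
             if top_event(v))
print(f"{pr_top:.5E}")  # 1.99800E-03, matching the session output

def threshold_exponent(n, d=5):
    """Smallest integer k with (2**n - 1) * 10**-k <= 0.5 * 10**-d."""
    k = d
    while (2**n - 1) * 10.0**-k > 0.5 * 10.0**-d:
        k += 1
    return k

print(threshold_exponent(20))  # exponent for a 20-event tree, 5 digits
```

For a 20-event tree and five significant digits, the bound gives \(k = 12\): any state whose probability is below \(10^{-12}\) can be pruned without disturbing the printed answer.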
The function

\[(2^n-1) \times 10^{-k} \leq 0.5 \times 10^{-d}\]

is solved for the smallest integer value of \(k\) that satisfies the equation. Once a binary vector representing a system-fail configuration is found, its probability is added to the running total of system-failure probabilities.

The solution algorithm can be roughly summarized as follows:

1) Calculate the weight function (shown above).

2) Rank the probabilities of failure. Two fault vectors are maintained, IVEC and JVEC. IVEC represents the basic events in the order in which they were entered in the model. JVEC, on the other hand, ranks the probabilities of failure and orders the basic events from highest \(P(f)\) to lowest. The need for two vector representations will become clear shortly.

3) Select the \((0 \ 0 \ 0 \ \cdots \ 0_n)\) fault vector.

4) Using the IVEC representation, check to see if the fault vector represents a system fail state.

5) If the fault vector represents a system fail state, then: If this is the first system fail fault vector to be found, calculate the probability of the set of events represented by the fault vector. Then multiply this value by the weight function to obtain the threshold value, CUTOFF. Lastly, initialize the total probability of system failure to the fault vector probability. If this is not the first system fail fault vector, simply add its probability of occurrence to the total probability of failure.

6) Increment the JVEC fault vector, and generate the equivalent IVEC fault vector. By incrementing the JVEC fault vector and then converting to IVEC, the fault vectors are generated in patterns of decreasing probabilities.

7) If at least one fault vector has been found representing a system fail state, calculate the new fault vector's probability of occurrence, PFVOCC. If a PFVOCC value is calculated, it is compared to the CUTOFF value. If \(PFVOCC < CUTOFF\), then the next JVEC fault vector with a PFVOCC greater than CUTOFF is generated.
If no such vector exists, or the vector \((1 \ 1 \ 1 \ \cdots \ 1_n)\) has been reached, the program jumps to step 9.

8) Loop to step 4.

9) The total probability of failure is passed to the calling program.

Fault tree solvers exist; it is felt that the FTC program is superior in its tree input language and specification syntax. The use of subtrees to reduce run time and aid in the analysis of large trees is especially powerful. The probability of the top event in the system tree is exact to five significant digits. It is, however, the user's responsibility to assure that basic events in the tree are independent, and that the tree is semantically correct. The fault tree is often satisfactory for design analysis and system failure studies. Though the fault tree methodology may have limitations for the detailed analysis of complex systems containing dependencies, it can be very useful in preliminary design reviews.

APPENDIX B

**Error Messages**

Error and warning messages are listed in alphabetical order, with messages beginning with a symbol (i.e. =, ], ;) listed at the end.

ALREADY DEFINED AS A GLOBAL CONSTANT - The value defined has been defined previously as a global constant.

ALREADY DEFINED AS A GATE OUTPUT OR EVENT - The value has been defined previously as a gate output or event.

ALREADY DEFINED AS A LOCAL CONSTANT - The value has been previously defined as a local constant.

ALREADY DEFINED AS A RESERVED WORD - The value defined is an FTC reserved word.

ARGUMENT TO EXP FUNCTION MUST BE < 8.80289E+01 - The argument to the EXP function is too large.

ARGUMENT TO LN OR SQRT FUNCTION MUST BE > 0 - The LN and SQRT functions require positive arguments.

ARGUMENT TO STANDARD FUNCTION MISSING - No argument was supplied for a standard function.

COMMA EXPECTED - Syntax error; a comma is needed.

CONSTANT EXPECTED - Syntax error; a constant is expected.
DIVISION BY ZERO NOT ALLOWED - A division by 0 was encountered when evaluating the expression.

EVENT PROBABILITY GREATER THAN 1 - The event probability was evaluated to a value greater than 1.

EXP FUNCTION OVERFLOW - The argument to the EXP function is too large. The value of the argument must be less than 8.80289E+01.

EXPRESSION CANNOT CONTAIN THE VARIABLE - The variable cannot be defined in terms of itself.

EXPRESSION OVERFLOW - The value of the expression caused arithmetic overflow.

FILE NAME EXPECTED - Syntax error; the file name is missing.

FILE NAME TOO LONG - File names must be 80 characters or fewer.

IDENTIFIER EXPECTED - Syntax error; an identifier is missing.

IDENTIFIER NOT DEFINED - The identifier entered has not yet been defined.

ILLEGAL CHARACTER - The character used is not recognized by the FTC program.

ILLEGAL LN OR SQRT ARGUMENT - The LN and SQRT functions require positive arguments.

ILLEGAL STATEMENT - The command word is unknown to the program.

ILLEGAL NUMBER OF INPUTS TO GATE - The AND and OR gates may have an arbitrary number of inputs; however, the INVERT gate must have only one input, the EXCLUSIVE OR gate must have two inputs, and the M OF N gate must have the number of inputs such that \( N - M \geq 0 \).

INPUT ALREADY DEFINED AS A VARIABLE - The gate or variable defined in the statement has already been defined globally as a variable.

INPUT LINE TOO LONG - The command line exceeds the 100 character limit.

INTEGER EXPECTED - Syntax error; an integer is expected.

INV GATE MUST HAVE ONLY 1 INPUT - Only one input is allowed for the INVERT gate.

MUST BE IN "READ" MODE - The INPUT command can be used only in a file processed by a READ command.

NOT A VALID EVENT - Events used as gate inputs must be previously defined as a basic event or the output from a previous gate.

NO GATES IN FAULT TREE - The fault tree contains no gates.

NUMBER TOO LONG - Only 15 digits/characters allowed per number.
ONLY 1 VARIABLE ALLOWED - Only one variable can be defined per complete fault tree.

REAL EXPECTED - A floating point number is expected here.

SEMICOLON EXPECTED - Syntax error; a semicolon is needed.

SUB-EXPRESSION TOO LARGE, i.e. \( > 1.700000E+38 \) - An overflow condition was encountered when evaluating the expression.

SUBTREE RESULT NOT FOUND - The fault tree was unable to calculate subtree top event probabilities. Check for syntax errors in the subtrees.

TOP NOT REACHABLE - No combination of events led to the top event in the system tree.

UNKNOWN GATE TYPE - Verify that the gate type is AND, OR, INV, XOR, or M OF N. See the "Gate Definition" section of this paper for more information.

VARIABLE MUST BE DEFINED AT GLOBAL LEVEL - Variables may NOT be defined within subtrees; variables must be defined globally.

VMS FILE NOT FOUND - The file indicated on the READ command is not present on the disk. (Note: make sure your default directory is correct.)

*** WARNING: VARIABLE CHANGED TO A CONSTANT! PREVIOUS EVENTS MAY BE WRONG - If previous basic events have been defined using a variable and the variable name is changed, inconsistencies may appear in the results.

*** WARNING: SYNTAX ERRORS PRESENT BEFORE RUN - Syntax errors were present during the model description process. They may or may not have been corrected prior to the run.

*** WARNING: RUN-TIME PROCESSING ERRORS - Computation overflow occurred during execution.

*** WARNING: REMAINDER ON INPUT LINE IGNORED - The information on the rest of the input line is disregarded.

= EXPECTED - Syntax error; the = operator is needed.

] EXPECTED - A right bracket is missing in the expression.

< EXPECTED - Syntax error; the < symbol is needed.

) EXPECTED - A right parenthesis is missing in the expression.

( EXPECTED - A left parenthesis is missing in the expression.

## Abstract

The Fault Tree Compiler Program is a new reliability tool used to predict the top-event probability for a fault tree.
Five different gate types are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N gates. The high-level input language is easy to understand and use when describing the system tree. In addition, the use of the hierarchical fault tree capability can simplify the tree description and decrease program execution time. The current solution technique provides an answer that is precise, within the limits of double-precision floating-point arithmetic, to five significant digits. The user may vary one failure rate or failure probability over a range of values and plot the results for sensitivity analyses. The solution technique is implemented in FORTRAN; the remaining program code is implemented in Pascal. The program is written to run on a Digital Equipment Corporation VAX with the VMS operating system.
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19870011332.pdf", "len_cl100k_base": 13435, "olmocr-version": "0.1.50", "pdf-total-pages": 40, "total-fallback-pages": 0, "total-input-tokens": 85421, "total-output-tokens": 15691, "length": "2e13", "weborganizer": {"__label__adult": 0.0003528594970703125, "__label__art_design": 0.000579833984375, "__label__crime_law": 0.0003688335418701172, "__label__education_jobs": 0.001575469970703125, "__label__entertainment": 0.0001302957534790039, "__label__fashion_beauty": 0.0002338886260986328, "__label__finance_business": 0.000492095947265625, "__label__food_dining": 0.00043845176696777344, "__label__games": 0.0010442733764648438, "__label__hardware": 0.006008148193359375, "__label__health": 0.0005078315734863281, "__label__history": 0.0004875659942626953, "__label__home_hobbies": 0.000209808349609375, "__label__industrial": 0.0029277801513671875, "__label__literature": 0.0002942085266113281, "__label__politics": 0.0003237724304199219, "__label__religion": 0.0006914138793945312, "__label__science_tech": 0.3779296875, "__label__social_life": 0.00010091066360473631, "__label__software": 0.0250701904296875, "__label__software_dev": 0.57861328125, "__label__sports_fitness": 0.00038552284240722656, "__label__transportation": 0.0010328292846679688, "__label__travel": 0.0001900196075439453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 52118, 0.07736]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 52118, 0.68802]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 52118, 0.87086]], "google_gemma-3-12b-it_contains_pii": [[0, 399, false], [399, 1891, null], [1891, 4080, null], [4080, 5882, null], [5882, 5891, null], [5891, 7112, null], [7112, 9707, null], [9707, 11155, null], [11155, 12498, null], [12498, 13918, null], [13918, 15194, null], [15194, 16278, null], [16278, 17682, 
null], [17682, 19161, null], [19161, 20452, null], [20452, 21688, null], [21688, 22782, null], [22782, 23883, null], [23883, 25401, null], [25401, 27141, null], [27141, 28799, null], [28799, 30139, null], [30139, 30370, null], [30370, 32283, null], [32283, 34788, null], [34788, 35466, null], [35466, 36587, null], [36587, 36596, null], [36596, 37142, null], [37142, 37593, null], [37593, 38264, null], [38264, 39957, null], [39957, 41719, null], [41719, 43661, null], [43661, 45562, null], [45562, 46282, null], [46282, 47959, null], [47959, 50029, null], [50029, 51176, null], [51176, 52118, null]], "google_gemma-3-12b-it_is_public_document": [[0, 399, true], [399, 1891, null], [1891, 4080, null], [4080, 5882, null], [5882, 5891, null], [5891, 7112, null], [7112, 9707, null], [9707, 11155, null], [11155, 12498, null], [12498, 13918, null], [13918, 15194, null], [15194, 16278, null], [16278, 17682, null], [17682, 19161, null], [19161, 20452, null], [20452, 21688, null], [21688, 22782, null], [22782, 23883, null], [23883, 25401, null], [25401, 27141, null], [27141, 28799, null], [28799, 30139, null], [30139, 30370, null], [30370, 32283, null], [32283, 34788, null], [34788, 35466, null], [35466, 36587, null], [36587, 36596, null], [36596, 37142, null], [37142, 37593, null], [37593, 38264, null], [38264, 39957, null], [39957, 41719, null], [41719, 43661, null], [43661, 45562, null], [45562, 46282, null], [46282, 47959, null], [47959, 50029, null], [50029, 51176, null], [51176, 52118, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 52118, null]], 
"google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 52118, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 52118, null]], "pdf_page_numbers": [[0, 399, 1], [399, 1891, 2], [1891, 4080, 3], [4080, 5882, 4], [5882, 5891, 5], [5891, 7112, 6], [7112, 9707, 7], [9707, 11155, 8], [11155, 12498, 9], [12498, 13918, 10], [13918, 15194, 11], [15194, 16278, 12], [16278, 17682, 13], [17682, 19161, 14], [19161, 20452, 15], [20452, 21688, 16], [21688, 22782, 17], [22782, 23883, 18], [23883, 25401, 19], [25401, 27141, 20], [27141, 28799, 21], [28799, 30139, 22], [30139, 30370, 23], [30370, 32283, 24], [32283, 34788, 25], [34788, 35466, 26], [35466, 36587, 27], [36587, 36596, 28], [36596, 37142, 29], [37142, 37593, 30], [37593, 38264, 31], [38264, 39957, 32], [39957, 41719, 33], [41719, 43661, 34], [43661, 45562, 35], [45562, 46282, 36], [46282, 47959, 37], [47959, 50029, 38], [50029, 51176, 39], [51176, 52118, 40]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 52118, 0.05369]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
81ab5d5de40f7a4f365ccd774e479a7ef6714bf8
Applying Web-based Networking Protocols and Software Architectures for providing adaptivity, personalization, and remotization features to Industrial Human Machine Interface Applications Alessandro Bozzon, Marco Brambilla, Piero Fraternali, Paolo Speroni, and Giovanni Toffetti Politecnico di Milano, Dipartimento di Elettronica e Informazione, Italy {bozzon, mbrambil, fraterna, paolo.speroni, toffetti}@elet.polimi.it Abstract This paper proposes an innovative use of a mix of networking standards and software implementation technologies for the design of industrial Human Machine Interface (HMI) systems. We describe how well-known technologies and practices can be transferred from internet-based architectures to embedded systems. We analyze the technologies that can be fruitfully used in the implementation of HMI architectures and illustrate the design of a real industrial HMI system that exploits internet communication protocols and Web-based architectures. Several advanced features can be achieved thanks to this architecture, such as application adaptivity, interface personalization, control remotization, and multi-channel notification. Finally, we evaluate the resulting platform in terms of performance, reliability, and usability. 1. Introduction The current status of industrial HMI (Human Machine Interface) in the field of industrial automation is characterized by a predominance of embedded low-power devices, typically comprising LCD touch-screens and/or keyboards, that are interfaced with proprietary or standard field buses specifically devised for industrial plant monitoring and automation. Commercial systems typically rely on proprietary architectures for the hardware and the operating systems, the I/O interface, the communication protocols implementation, the graphic display management, and the business logic.
This situation is largely due to the strong focus on cost, performance, and reliability, which largely overcomes the interest in standard architectures and in high quality of interfaces and services. Moreover, industrial automation communication protocols have not reached the same level of standardization as office communication networks, which further justifies the predominance of proprietary architectures. Even if we consider only the networking standards, the industrial field features hundreds of would-be standard protocols, without any clearly predominant solution. However, the success of the Internet and of the Web has started impacting the industrial HMI world too. Industrial users are becoming familiar with Web interfaces, graphical quality, multimedia content, and features such as mobility, adaptivity, and personalization of applications. At the same time, TCP/IP-based communication protocols and embedded operating systems have started to spread in the industrial automation field, thus reducing the need for proprietary architectures and making enterprise-wide integration more appealing. In this scenario, it is easy to foresee a slow but inexorable convergence of industrial HMI solutions towards standard architectures, standard communication protocols, and advanced interactive functions. Our work focuses on the design of a new distributed software architecture for HMI systems able to provide features and services such as personalization, adaptivity, distribution, mobility, multi-channel notification, and integration with office networks and software packages, while preserving the robustness, reliability, performance, and cost-effectiveness of traditional HMI solutions. The project, called ESA-MyHMI, is a research activity carried out in collaboration between Politecnico di Milano and ESA Elettronica S.p.A., an Italian company operating in the HMI market.
The project has led to a novel HMI architecture, which leverages the most advanced architectural patterns of multi-tier Web applications to deploy sophisticated HMI functionalities on top of low-cost, industrial-class, embedded hardware. The paper is organized as follows: Section 2 presents the current status of the HMI market; Section 3 states the requirements of the myHMI project; Section 4 discusses the major design decisions involved in the architecture specification; Section 5 focuses on the configuration of the HMI projects; Section 6 presents the implementation experience; Section 7 evaluates the results of the performance tests; Section 8 reviews the related work; and finally Section 9 draws the conclusions and highlights the future work. 2. Overview of the industrial HMI market The HMI market is intrinsically a slowly moving world. Most of the market supply and demand still concerns low-end product categories, which are very well established in the market and feature only basic technical and functional solutions, sometimes implemented with obsolete technologies and approaches. Industrial HMI products rarely implement innovative services, such as remote access to the plant control, messaging, and remote notification. Indeed, HMI companies seem to privilege exclusively performance and good access to industrial communication standards, even when these factors are incompatible with the adoption of innovative solutions based on modern and solid Web architectures. Even the HMI players that seem to offer the most innovative contents (and claim their products as Web-enabled) still rely on legacy architectures, typically exploiting monolithic applications that provide poor flexibility and personalization features to system architects and users.
Products tagged as “Web-enabled” in the current HMI market often simply provide, as a matter of fact, applications natively available in the Windows CE or Windows XP Embedded operating systems (e.g., Internet Explorer, the HTTPD web server, a mail client, Messenger, and so on) without any real added value. On the contrary, recent studies [7] show how users are increasingly looking towards a new range of products with advanced features, superior graphical capabilities, and improved usability that could grant: - remote and, possibly, distributed control of an industrial plant, by providing all those characteristics that directly or indirectly enable a system to be remotely controlled through the use of network services (thin browser plug-ins, PDA/Smart devices, etc.); - novel remote notification solutions for the occurrence of an event or of an alarm, even when the user is not in front of the terminal; - personalization and automatic adaptation of the GUI, to allow users to customize the displayed information (e.g., alarms, screen data, widgets, and so on) and the graphic properties of the interface (e.g., colors); - integration with existing enterprise processes, systems, and equipment (e.g., IT infrastructure, fieldbus architecture, etc.); - openness to new standards and best practices in the field, by offering low-cost modularity and extensibility. SCADA (Supervisory Control And Data Acquisition) systems have recently introduced some interesting innovations but, as the acronym suggests, they target products that implement a wide range of high-level functionalities and that can be deployed in a large set of contexts. They are typically used for human-machine interfaces to be deployed on high-profile devices (PCs and powerful embedded systems) and represent a niche in the HMI market.
In the other market sectors, innovation has been led by major vendors (e.g., Siemens), who have been working for the past few years on raising the level of the features provided by traditional HMI applications. Sm@rtAccess [20], for example, is a technology developed by Siemens that allows distributing the control of an industrial plant over a maximum of three stations. Its functioning, though, is based on Sm@rt devices that simply broadcast the displayed image of the apparatus directly connected with the plant to the other clients. As a drawback, such an approach inflates the required transfer bandwidth, exceeding the capacity of a typical Internet connection; moreover, Sm@rt technology simply exploits a Windows CE built-in application, available also on competitors’ products, which means that the technical added value is very low. Finally, its benefits can be exploited only for industrial plants having several Sm@rtAccess-ready HMI stations. Progea [21] proposes a more innovative solution by offering remotization features and a Web-based architecture. Running the Progea server application on a Windows XP based PC, it is possible to remotely control a plant from an internet-connected standard Web browser with JVM (Java Virtual Machine) support. Even if powerful, this solution is not fully portable since, as Progea reports on its website, “because Java engine is not so reliable on WinCE, Progea has developed a special Client component, to ensure access to the Server also from WinCE stations”, thus offering different implementations for different platforms. 3.
Requirements for novel HMI solutions The market of industrial HMI is seeing a slow but steady evolution towards the integration of industrial automation terminals with software and hardware architectures typical of office and Web-based applications, to achieve greater usability and flexibility of the interface and easier interoperability between industrial automation solutions and enterprise information systems. This goal requires unbundling the functions and modules of a traditional HMI solution, deploying them over a modular and distributed system, which exploits the open standards of the Internet and the architectural patterns of multi-tier Web applications. The myHMI project aims at designing, implementing and evaluating a distributed HMI platform enabling the definition of remotizable, personalized, and adaptable multi-device human-machine interfaces, which can be seamlessly accessed both locally and remotely and can be easily integrated in the enterprise ICT infrastructure. Table 1. Functional Requirements. <table> <thead> <tr> <th>Functional Requirements</th> </tr> </thead> <tbody> <tr> <td>Dynamic configuration</td> </tr> <tr> <td>The organization and appearance of the HMI should not be hard-wired, but dynamically configurable in terms of number and type of the controlled variables, layout of the pages, displayed data, and so forth.</td> </tr> <tr> <td>User login and access control</td> </tr> <tr> <td>Users should be uniquely identified, and granted access to the system based on a successful authentication.
Access control should include page access and single object interaction control.</td> </tr> <tr> <td>Personalization</td> </tr> <tr> <td>The user should be able to customize (pieces of) the displayed information and the graphic properties of the interface, and save his preferences in a profile.</td> </tr> <tr> <td>Interface adaptation</td> </tr> <tr> <td>The user interface should adapt itself to fit the screen of heterogeneous devices by means of declarative rules.</td> </tr> <tr> <td>Alarm management policies</td> </tr> <tr> <td>The system should provide mechanisms for the notification of the alarms to the user, according to specific policies.</td> </tr> <tr> <td>Functional restriction</td> </tr> <tr> <td>The producer of the HMI system should be able to disable selected functions on specific terminals, for tuning the feature set to the product's commercial value.</td> </tr> <tr> <td>Reporting</td> </tr> <tr> <td>Reports from log data should be produced in different formats, to allow remote visualization, dispatching and printing.</td> </tr> </tbody> </table> 4. The MyHMI architecture In this section we overview the main characteristics of the design of the MyHMI framework. The overall architecture of the myHMI platform is illustrated in Figure 1: the HMI functionality, usually embedded within the terminal attached to the controlled system, becomes partitioned into a client-server architecture, implemented on top of a hybrid communication network, comprising an Ethernet backbone that connects the HMI devices and a set of field bus protocols for connecting to the controlled plant. Table 2. Non-Functional Requirements. <table> <thead> <tr> <th>Non-Functional Requirements</th> </tr> </thead> <tbody> <tr> <td>Network topology</td> </tr> <tr> <td>The system should support a network architecture for standalone, local (LAN), fixed remote (wired Web) and mobile remote (Wireless) access.
Client-server communication should exploit the HTTP protocol, for firewall compatibility.</td> </tr> <tr> <td>Software architecture</td> </tr> <tr> <td>The software should be based on standard operating systems (i.e., both Linux and MS Windows). The client application should run in a standard Web browser and should automatically scale on different screen resolutions; the server application should exploit a standard dynamic Web architecture.</td> </tr> <tr> <td>Presentation</td> </tr> <tr> <td>The interface should exploit device-independent rendition technologies (i.e., XHTML, SVG, Flash).</td> </tr> <tr> <td>Performance</td> </tr> <tr> <td>Performance of page data refresh should be comparable to that of standalone HMI systems (10 data refreshes per second). Furthermore, idle time after login should not exceed 3 seconds and the delay after a page switch should be less than 1 second.</td> </tr> </tbody> </table> 4.1. General design choices The design of the system had to address several issues, according to the requirements. In this section we summarize the issues and the adopted solutions. 4.1.1. Distribution model of presentation and business logic. The architecture design has been addressed by applying state-of-the-art solutions for ensuring separation of concerns and modular implementation. The possible solutions can be summarized in the following approaches: 1. **Monolithic architecture**: in this solution the whole application resides on the same device and is composed of modules that cannot work separately. This is the solution adopted by traditional HMI applications. Its main advantage is the low consumption of resources. In contrast, this solution can provide only a low level of personalization, remotization and device independence. 2. **Pure web architecture**: this solution offers the typical features of a web application. This means that client and server features are separated and several clients can access the same server.
The main drawback of this solution is the low quality of the interface and of the user interaction at client side (typical of traditional Web applications). Moreover, most of the computation is delegated to the server, including the preparation of the GUI and the management of the user’s input. 3. **Rich web interface**: this solution consists of extending the client side of traditional web architectures, thus moving some of the computation from the server side to the client side, which can be classified as a smart client interface. The business layer is still located at server side and contains the control policies (e.g., the interface adaptation and personalization rules, the alarm management and data logging policies, etc.), while the presentation layer is implemented at the client side. It is responsible for building the interface for the human supervisor and for managing the user interaction. The client side can leverage powerful rendering and interface technologies, such as Macromedia FLASH, SVG, Laszlo, or others. According to the requirements, we decided to adopt solution 3. The abovementioned modularization of the architecture allows great flexibility, but also poses severe challenges to the overall performance. Several re-design cycles have been performed to guarantee that the partition of responsibility between the client and the server fulfilled the performance requirements and provided the best interaction capabilities. 4.1.2. Personalization solutions. One of the most challenging requirements called for strong personalization capabilities of the platform. In order to achieve this, three possible solutions could be exploited: 1. **Separate applications**: the crudest solution is to deploy a separate application for each user or group of users. This approach is not viable in terms of performance and modularity of the design, since it requires several processes to be running at the same time. 2.
**Totally dynamic configuration**: this represents the opposite extreme in the spectrum of the solutions. It provides a unique implementation, which is so general that all the possible personalization rules are applied at runtime, once the user is recognized. This is a very powerful and flexible approach; however, it entails huge resource consumption in terms of computational power. Unfortunately, embedded systems cannot afford this kind of approach. 3. **Personalization based on groups**: an intermediate solution consists of providing a unique application with full-fledged personalization based on user groups. This approach assumes that users can be classified in groups/roles, thus factoring out most of the personalization rules at the level of the groups instead of the level of the single user. The remaining detailed personalization rules can be applied to the single user, but we may assume that their number and complexity are very limited. In this way, the application can exploit a set of pre-determined versions of the interface based on groups, and then apply a small number of user-specific personalization rules. The last solution implements a good trade-off between the need for fine-grained personalization and the need to reduce the complexity of the computation. 4.1.3. Configuration. The application can be configured by means of a set of descriptors, which are edited offline by means of a Configuration Editor and uploaded to the server. The deployment descriptor contains all the details of the HMI project, including the interface layout, the list of controlled variables, the data to be visualized on each screen, the user access rights, the alarm management policies, and so forth. The configuration management can be implemented in two ways: 1. **Configuration files**: the HMI platform exploits a generic binary code that is configured at runtime by reading a set of XML descriptors containing the configuration rules for the specific application.
This brings the advantage of having a unique binary code and easily readable and modifiable configuration files. On the other hand, this solution requires some additional computation and delays, due to file access and the application of the rules. 2. **Configured binary code**: this solution compiles running code already configured for each specific application. This means that the offline Configurator Tool is in charge of building specific executable code every time the configuration process occurs. This makes it possible to produce much more efficient binary code for the HMI platform. The adopted solution comprises a mix of the previous options: the final architecture provides a server side application implemented according to solution 1, which means that the server side application exploits a unique binary code plus a set of XML configuration files. The client side part of the architecture is compiled into a unique running code, comprising several modules that also embed the configuration rules (page structure, adaptivity, personalization). The generated code is devised to be executed within browsers on any platform (embedded system, MS Windows PCs, Linux systems, and so on). This allows very efficient clients (whose code can be easily compiled offline thanks to existing technologies like Laszlo, which can compile client-side code starting from declarative specifications) and good server side performance (the access to XML files by server side programs is reasonably fast, and avoids the burden of building procedural code and recompiling it at configuration time). 4.1.4. **Connectivity.** The communication between client and server could be implemented by means of different protocols and techniques: 1. **TCP-IP bidirectional communication**: based on the management of sockets and allowing any kind of communication between the parties, without exploiting any web-specific behavior; 2.
**HTTP interactions**: based on the standard HTTP protocol and implementing client updates by means of polling-like requests from the client to the server; 3. **HTTP with (emulated) callback**: implementing a simulated callback from the server to the client on data updates and other events, alarms, and so on. The adopted solution is the emulated callback implemented through the HTTP request-response cycle. The client submits requests upon user interaction and upon timeouts generated by its internal clock. Requests submitted to the server are left pending until an actual update on the status of the controlled system happens. In this case, the server sends a response to the client, thus simulating an event-based message sending. Upon reception of the response, the client submits another request. Notice that the server response contains only the field data and the updated configuration parameters, whereas all the other aspects of the page remain cached at the client and need not be exchanged. **4.1.5. Personalization enactment.** Some specific decisions must be taken about how and where to apply the personalization and adaptation rules on the interfaces. Personalization and adaptivity rules could be stored and managed with two approaches: 1. **Rules encoded as XML files**: personalization rules are generated by the offline configurator tool in XML format. Such rules are parsed and interpreted at runtime by general purpose code that generates the expected interface. A variant of the approach could devise several specific components for parsing the personalization and adaptivity rules addressing different issues (e.g., user interface, alarm configuration, and so on); 2.
**Rules embedded in the source code**: the rules are compiled by the offline configurator tool directly into binary code, so that no rule parsing or interpretation is needed at runtime. The options about where to apply the rules are: 1. **Server side only rule calculations**: the rules are entirely applied at the server side and the client receives the information and the application structure description ready for the specific context (user and device); 2. **Client side only rule calculations**: the rules are sent to the client together with the information, and the client integrates them according to the context; 3. **Mixed rule calculations**: this solution represents a hybrid approach, in which part of the rules are applied at server side and the remaining ones at client side. In our implementation, we chose the third solution. We apply at client side the rules that affect the user interface and, in general, the client-side issues. For this part, the rules have been stored as binary code in the client application, for performance reasons. Vice versa, we adopted server-side application of the rules concerning the server configuration. In this case, the rules have been encoded as XML files and parsed by the server components. This decision was mandatory, since generating binary code for the server-side rules would have required embedding a C++ compiler in the configurator tool. 4.1.6. Access to field variables. The controlled system status can be made visible to the HMI system at different layers: 1. Directly to the client: the client checks the status directly on the field bus. This solution has the big disadvantage of losing the standard web configuration of the interactions; 2. Through a single centralized server: all the clients invoke a central server that acts as a gateway and provides the data about the plant status. This solution is acceptable for small systems whose status can be checked through one single entry point; 3.
Through a server or a proxy: every client always invokes the same server, but several servers can be located on the plant to access different parts of the controlled system. The server invoked by the client may act as a proxy, asking the needed information from the origin server that actually has access to the required field data. We adopt solution 2 for simple single-server architectures, while we move to solution 3 in case of complex configurations. In any case, we chose to avoid clients that could directly invoke the field or several servers at the same time. 4.2. Design of the server In the proposed MyHMI architecture, the server assumes the role of broker between the HMI interfaces and other servers communicating on TCP/IP networks, and the field buses connecting heterogeneous devices, possibly communicating through proprietary protocols. The server manages the connection to the field (via an OPC server module [1] or similar interfaces) and buffers the field data (in a data server module) to be delivered to the clients over the TCP/IP connection. Clients can be deployed in two configurations: locally at the server’s node (thus offering an integrated terminal interface) or remotely on a separate terminal connected to the server by means of a TCP/IP network. The server manages three types of client requests (initialization requests, new page requests, and data refresh requests), plus event-triggered executions. Initialization and new page requests may require the computation of server-side personalization rules (typically those involving alarm management), which are processed by the server based on the identity of the requesting terminal and user; page data refresh requests involve only the shipping of raw data to the client and are served faster. As depicted in Figure 2, despite such a variety of involved actors, the server identifies the boundary between two major classes of components.
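The distinction between requests that trigger server-side personalization (initialization and new page requests) and lightweight raw-data refresh requests can be sketched as a minimal dispatcher. This is an illustrative reconstruction only; all names (`handle_request`, `apply_personalization`, `FIELD_DATA`) are hypothetical and not taken from the MyHMI code base:

```python
# Minimal sketch of the request dispatching described above: initialization
# and new-page requests run the (more expensive) personalization rules,
# while data-refresh requests only ship raw buffered field data.

FIELD_DATA = {"temp": 21.5, "pressure": 1.2}  # stand-in for the data server buffer

def apply_personalization(user, page):
    # Placeholder for the server-side rule engine (e.g., alarm policies).
    return {"user": user, "page": page, "widgets": ["trend", "alarm_list"]}

def handle_request(kind, user=None, page=None):
    if kind in ("init", "new_page"):
        # Personalization rules are evaluated per requesting terminal/user.
        return apply_personalization(user, page)
    if kind == "refresh":
        # Raw data only: no rule evaluation, so the request is served faster.
        return dict(FIELD_DATA)
    raise ValueError(f"unknown request kind: {kind}")

view = handle_request("init", user="operator1", page="overview")
data = handle_request("refresh")
```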
On one side there is the controlled system, composed of different devices communicating both through industrial protocols (e.g., Modbus, Fieldbus) and web protocols (e.g., TCP/IP) and conveying data originating from the controlled environment; on the other side there are users, interacting with the controlled system with the support of a client user interface. Acting as a broker, the server has to deal with several challenging tasks, such as: (i) managing and coordinating the data flow between the involved actors, possibly performing ad-hoc data manipulation and aggregation; (ii) guaranteeing the synchronization of the status information at the different peers; and (iii) offering a secure and reliable service by ensuring fail-safe execution of user commands. The server’s internal organization has been conceived to enhance modularity, extensibility, component re-use, and performance. In Figure 3 and Figure 4, we can identify three macro components of the server internal architecture: 1. the Field Interface Management (FIM); 2. the Control Interface Management (CIM); 3. the User Interface Management (UIM). The FIM comprises all the server sub-components responsible for managing the communication with the field devices, while providing the abstraction and modularity that allow other components to ignore the physical features, topologies, and protocols of the devices. Among these components, the most important is the Data Server: its responsibility is to control the system components used for all the input and output operations that have to be performed with field devices. Interaction with the field is accomplished through a standard OPC client/server module, thus adding another level of abstraction (and modularity) to the system. The CIM handles all the features related to user-command management, content personalization, and adaptivity.
Since multiple users are allowed to interact with the system and the user interface’s contents depend directly on user devices and authorizations, there is a need for ad-hoc data structures and operations able to comply with the performance, scalability, concurrency, and reliability requirements. In order to respond to such demands, the internal organization of the CIM relies on orthogonal modules (included in the brokering control macro component) responsible for managing the communication with the FIM, each one dedicated to a single aspect. The synchronization module takes care of guaranteeing correct concurrent execution of operations performed by different clients over shared objects (like field variables or alarm states), avoiding inconsistencies and, where needed, overcoming communication problems that could otherwise lead to unstable system states. Working in collaboration with the synchronization module, the system actions component (Figure 4) provides an interface for the execution of commands over the system, offering the whole set of operations made available by the system architect, such as setting the value of a field variable or acknowledging an active alarm. The adaptation module, instead, is responsible for applying the personalization rules to the data to be dispatched to clients. Complementary to the adaptation module, the communication buffers component (Figure 4) addresses the performance requirements of MyHMI when delivering multiple updates of the system state to clients. Every time a client connects to the system, a dedicated communication buffer is assigned to it and is initially filled with all the values needed to build up the current system state view for the client. When the system produces a new event (like a new value, an alarm, and so on), the event instance is processed by the adaptation module and, if pertinent to the user personalization rules, is queued inside the buffer.
The buffer, thanks to a publisher/subscriber registration pattern, notifies the upper levels of the architecture about the presence of new values. If possible, the upper level components update their state by retrieving (and flushing) their buffer. Thus, precise filtering of all the information directed to clients is applied at the source, reducing the amount of data managed by the system and, hence, improving performance. Client adaptation is also achieved by dimensioning the buffer size (and the buffer frame) depending on the client device performance (this information is provided by clients during their connection). Moreover, to ensure communication reliability, a buffer provides a verification mechanism over sent values by implementing a simple yet effective CRC control which checks, for every communication cycle (i.e., when clients empty their buffer), whether the previous communication succeeded. In case of failure, the server sends the same information again. In addition, the CIM implements several advanced functions, like multiple logging subsystems, possibly located on other network nodes (for example, dedicated PCs equipped with database systems), and multiple notification subsystems, allowing operators to receive notifications via short text messages on cellular phones, e-mail, or even instant messaging platforms. Again, the CIM is independent from both the field components and the user components, allowing the same approach to be reused over different architectures, both distributed (like Web Services or RPC calls) and monolithic. Finally, the UIM is the component delegated to orchestrate and synchronize the interaction with clients; since MyHMI relies on a Web architecture, the UIM is organized according to the Model View Controller (MVC) design pattern [5], in its Web-compatible version known as Model 2 (MVC2) [6].
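The per-client communication buffer described above, with its publisher/subscriber notification and per-cycle CRC check, can be illustrated with a minimal sketch. Class and method names are invented for illustration; the paper does not specify the actual interfaces, and the real component is part of the server-side implementation:

```python
import zlib

class ClientBuffer:
    """Illustrative per-client communication buffer: events are filtered at
    the source by the user's personalization rules, subscribers are notified
    of new values, and each flushed frame carries a CRC so a failed
    communication cycle can be detected and the frame resent."""

    def __init__(self, is_relevant, notify):
        self.is_relevant = is_relevant  # adaptation-module filter predicate
        self.notify = notify            # publisher/subscriber callback
        self.pending = []
        self.last_frame = None

    def publish(self, event):
        # Filtering at the source: irrelevant events never reach the client.
        if self.is_relevant(event):
            self.pending.append(event)
            self.notify(self)           # tell upper layers new values exist

    def flush(self):
        # One communication cycle: ship pending events together with a CRC.
        payload = repr(self.pending).encode()
        self.last_frame = {"events": self.pending, "crc": zlib.crc32(payload)}
        self.pending = []
        return self.last_frame

    def resend_if_failed(self, crc_reported_by_client):
        # If the client's CRC does not match, resend the same frame.
        if self.last_frame and crc_reported_by_client != self.last_frame["crc"]:
            return self.last_frame
        return None

notified = []
buf = ClientBuffer(lambda e: e["kind"] == "alarm", notified.append)
buf.publish({"kind": "alarm", "id": 1})
buf.publish({"kind": "trace", "id": 2})   # filtered out by the user's rules
frame = buf.flush()
```

Filtering in `publish` rather than in `flush` mirrors the paper's point that data reduction is applied at the source, before it is ever buffered for transfer.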
The MVC2 pattern separates the three essential functions of an interactive application: the business logic (the model, in our case the CIM), the interface shipped to the client application (the view), and the control of the interaction triggered by the user (the controller). Clients are the emitters of requests: when a user executes an action on the user interface (such as pushing a button or acknowledging an alarm), an HTTP request is sent to the server and caught by the UIM (controller), which in turn decides the course of action necessary to serve the request: (i) executing an operation over the system, or (ii) retrieving the updated system state. After completion, the CIM communicates the outcome of its execution to the UIM, which decides whether to execute further actions or to invoke a View component, responsible for formatting the results sent back to the client.

4.3. Client-server interaction design

An important aspect to consider when dealing with Web architectures is the asymmetric nature of the protocol clients use to communicate with the server (HTTP): only clients can issue requests to the server, and not vice versa. This constraint hampers the optimization of the interaction, because clients cannot be notified by the server of new events (new variable values, instructions, alarms) and must instead periodically invoke the server to retrieve updated information. Modern Web applications are starting to discover the usefulness of bidirectional communication mechanisms, leveraging push technologies to make Web servers proactive. Following the decision described in Section 4.1.4, we adopted a simulated callback approach. Such approaches usually rely on HTTP/1.1 persistent connections (see [22], [24], [25]).
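The polling cycle imposed by HTTP's request/response asymmetry, together with the two request types the controller distinguishes, can be sketched as a minimal simulation. No real HTTP is involved, and all names are hypothetical stand-ins:

```python
def make_server(state):
    """Stands in for the server-side controller: 'set' executes a
    command over the system, 'poll' returns the state changes
    accumulated since the client's previous request."""
    pending = dict(state)                # first poll delivers the full state
    def handle(request, **args):
        if request == "set":             # (i) execute an operation
            state[args["name"]] = args["value"]
            pending[args["name"]] = args["value"]
            return "ok"
        if request == "poll":            # (ii) retrieve the updated state
            changes = dict(pending)
            pending.clear()
            return changes
        raise ValueError(request)
    return handle

state = {"tank_level": 0.4, "belt_speed": 1.0}
server = make_server(state)

first = server("poll")                          # full state on first cycle
server("set", name="belt_speed", value=1.5)     # a user command
second = server("poll")                         # only the change
```

Because the server cannot push, every change sits in `pending` until the client's timer fires the next `poll`; the communication buffers of the CIM play this accumulating role in MyHMI.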
Persistent connections can now exploit the XMLHttpRequest mechanism [23] (originally developed by Microsoft as part of Outlook Web Access, and now a W3C working draft) and similar techniques to retrieve information from the server without necessarily reloading the whole page. Thanks to these technologies, clients can establish a single connection that remains available for servers to send data independently of user interaction. Unfortunately, persistent connections are expensive for servers to manage, making this approach unsuitable for low-computational-power devices like the ones targeted by MyHMI. The communication buffer mechanism implemented in the CIM helps overcome the drawbacks of the polling process.

4.4. Client-side design

The main role of the client layer in the MyHMI architecture is to manage data presentation and user interaction. Since we want our architecture to be independent of specific technologies, we produced a high-level design that can be implemented in different rendering environments. The internal component distribution is depicted in Figure 6: the client application incorporates an application shell, executed within the browser environment. The shell (written in a client-side scripting language) is separate from the graphic engine of the browser and adopts a Model-View organization. The Model contains the business objects of the interface (e.g., data variables, trend monitors), while the View comprises the widgets and presentation properties managed by the rendering technology (e.g., the widget used to display a data variable, a trend, or an input control). The shell is responsible for managing the initialization of the client application by composing the user
interface according to the information received from the server at login or when a GUI page is requested; the initialization procedure builds a list of the model objects and page widgets and registers them as listeners to their relevant data variables, following the building rules for the page (which include personalization and adaptation). As discussed in Section 4.1.5, client-side rules are stored in a binary format. The shell also manages an internal clock, used to automatically trigger requests to the server for refreshing the system state; upon reception of a server response, the shell updates the internal data variables, which automatically refresh the registered business objects and associated widgets. This data-centric approach redraws only the affected widgets, minimizing the computational effort and enhancing performance.

5. System Configuration

We devised a highly configurable system in which most of the components are configured during the setup stage by means of deployment descriptors, produced by an offline visual Configuration Editing tool. This section highlights the configuration mechanisms and solutions.

5.1. Server Configuration

Since the server stands between the controlled environment and the users, its configuration necessarily deals with both the users and the plant.
On one hand, the server collects all the information related to: (i) the number and type of all the field variables; (ii) the list of alarms the system should raise when a field variable assumes particular values; (iii) the topology of the controlled system, including references to distributed devices (such as physical addresses and IP addresses); (iv) the classes of client devices allowed to communicate with the server, together with information about their physical characteristics. On the other hand, the server also needs information about the organization of user interfaces, user authentication, user personalization, and so on. The deployment descriptors containing this information are encoded as XML files [2] and are processed by the server with an event-based parser (more precisely, a SAX parser) to avoid unnecessary object-creation overhead during server initialization. Although this approach proved suitable in terms of performance, the large number of configuration files (about one hundred for a typical middle-size project) led us to look for an alternative way to retrieve information while keeping XML as the reference file format. We settled on an approach based on plain-text index files associated with each XML file: the indexes contain absolute references to the XML tags inside the documents, allowing fast sequential reads and thus improving performance. The most complex configuration aspects concern the content personalization process: the CIM's adaptation module described in Section 4.2 must be fed with information about the pages currently displayed by the client, about the visual objects related to field variables (e.g., trend graphs, gauges or simple text fields), about alarms and, in general, about the entire set of variables managed by the server.
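The index-file technique described above — plain-text absolute offsets stored alongside each XML file, so a tag can be reached with a seek instead of a full parse — can be sketched as follows. The tag names and file layout are invented for illustration:

```python
import io

# A toy configuration document; in MyHMI this would live on flash storage.
xml = (b'<?xml version="1.0"?>\n'
       b'<variables>\n'
       b'<var name="tank_level" type="float"/>\n'
       b'<var name="tank_temp" type="float"/>\n'
       b'<var name="belt_speed" type="float"/>\n'
       b'</variables>\n')

def build_index(data, tag=b'<var '):
    """Return {ordinal: byte_offset} for every occurrence of `tag`.
    In the MyHMI approach this table would be written to a plain-text
    index file next to the XML document at configuration time."""
    index, pos = {}, 0
    for i in range(data.count(tag)):
        pos = data.find(tag, pos)
        index[i] = pos
        pos += 1
    return index

def read_entry(data, index, i):
    """Sequential read from a known offset: no XML parser involved."""
    f = io.BytesIO(data)          # stands in for an open file handle
    f.seek(index[i])
    return f.readline().rstrip()

index = build_index(xml)
```

The pay-off is that run-time lookups cost one seek and one sequential read, while the XML itself remains the authoritative, human-readable format.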
However, due to personalization and adaptivity rules, the properties of the objects placed inside a page can change depending on the device (screen size and computational power) and on the authentication level of the user. Personalization and adaptivity rules can of course apply simultaneously, so the number of effective rule combinations equals their Cartesian product. To minimize the total amount of information processed by the server, we implemented a two-step personalization algorithm. During system design, the configuration editing tool produces:

1. an XML descriptor file containing, for each page, the information associated with the most "general" page configuration (i.e., the page as displayed on the most powerful device to a user with the highest authentication level);
2. a "delta" XML file for each allowed client device, containing, for each user role, only the objects to modify or remove from the page.

When a user logs into the system, information about the device and the user's credentials is collected; on a page request, the personalization and adaptivity "delta" rules are applied to the general XML configuration file, producing the final rule set used by the CIM's adaptation module. Since XML parsing and transformation are known to be expensive tasks, we adopted a more pragmatic approach: sequential in-memory reads of the XML files as if they were plain text (thanks, again, to the index files), directly applying the personalization rules originating from the "delta" transformation files.

5.2. Client Configuration

While the server focuses on managing all the information related to the business logic of the system, the client configuration file contains parameters and instructions about (i) how to populate each page of the user interface and (ii) how to communicate with the server (and what to communicate).
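The two-step "delta" personalization of Section 5.1 can be sketched as a merge of a general page descriptor with per-device, per-role overrides. The dictionaries below are hypothetical stand-ins for the XML descriptors, with `None` marking an object to remove:

```python
# The most "general" page: most powerful device, highest authentication.
GENERAL_PAGE = {
    "tank_gauge":  {"widget": "gauge", "writable": True},
    "trend_graph": {"widget": "trend", "samples": 200},
    "belt_switch": {"widget": "button", "writable": True},
}

# Illustrative delta for a (PDA, operator) pair: shrink the trend,
# make the belt switch read-only, drop the gauge entirely.
DELTA_PDA_OPERATOR = {
    "trend_graph": {"samples": 50},
    "belt_switch": {"writable": False},
    "tank_gauge":  None,               # None = object removed from the page
}

def apply_delta(general, delta):
    """Produce the final page configuration without mutating the base."""
    page = {k: dict(v) for k, v in general.items()}
    for obj, change in delta.items():
        if change is None:
            page.pop(obj, None)
        else:
            page[obj].update(change)
    return page
```

Because a delta records only what differs from the general configuration, the server stores far fewer rules than the full Cartesian product of devices and roles would require.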
The configuration of the client leverages a client-side configuration file, downloaded at each user login and at each request for a new page, which contains all the relevant client-side personalization rules. The information the client needs to build the user interface includes: (i) the list of all the objects composing the user interface, together with their graphical properties; (ii) the identifier of the graphical skin used by the client instances; (iii) the page to display when a user logs in; (iv) for a logged-in user, the personalization rules to apply to each GUI page; (v) for a logged-in user, the default interface language. The amount and variety of data needed by the client call for optimized approaches to avoid redundancies and reduce page loading time, since applying adaptation and personalization features is a resource-consuming task. The chosen approach leverages the Configuration Editing Tool heavily: at compile time, it creates n different versions of the binary client application, each tailored to a particular pair of user and device classes. When a user logs in, the authentication information is communicated to the client application, which retrieves the proper pre-compiled user interface modules from the server. Information about the interaction between the client and the server (such as the format of the requests to perform) is also pre-compiled inside the downloaded modules. Language data are downloaded and applied on demand when (and if) the user requests a language other than the project default. This approach has the drawback of introducing many pre-compiled client modules, but it dramatically reduces the time needed at run-time to adapt the GUI to the user and the device, thus enhancing the reactivity of the system.

6. Implementation

This section reports our experience and evaluations in implementing a running prototype of the proposed MyHMI solution.
Thanks to an open-source Modbus simulator, we simulated the logic and data flows of a milk bottling plant, with about twenty controlled variables, such as the liquid level and temperature of the milk tank, the state and speed of the automatic conveyor belt, and so on.

6.1. Server-side implementation

The first implementation choice concerned the operating system and the technologies to associate with it. We evaluated both Linux-based and MS Windows-based operating systems, and finally decided to use Microsoft Windows CE 4.2 [4] and its built-in technologies: the Web publishing architecture relies on the ISAPI+HTTPD daemon, all the model and business logic components have been developed as Microsoft COM objects, and the controller part of our MVC architecture has been implemented as an ISAPI DLL. The choice of leveraging built-in Windows CE technologies was mainly driven by performance considerations, since alternative approaches (such as embedded Java-based Web servers) proved excessively resource-consuming. For the same reasons, we discarded more advanced solutions such as ASP, .NET or Web Services: some were not available on embedded devices, while others produced unsatisfactory results. The connection with the field is achieved through a run-time component (called OPC client) based on the OPC Data Access Custom Interface Specification 3.0, coupled with an OPC Server version 2.5.15. The OPC client interfaces the Data Server component to the OPC server, which in turn manages the field connections. The OPC client/server connection is opened at system start-up and kept alive until the server releases its resources before being switched off. Thanks to the OPC architecture, we could raise the level of abstraction of the communication with the field, developing a prototype compatible with almost all industrial field protocols.
However, in future optimized implementations we plan to remove the OPC layers in order to achieve better performance on low-power devices.

6.2. Client-side implementation

The implementation strategy for the client side has been heavily shaped by the constraints imposed by the target environment of the MyHMI project. In line with the choices made for the server side, we chose to exploit the Windows CE native browser and to extend it with plugins for visual rendering. Many of the most innovative and powerful technologies on the market have been reviewed, considering features such as: (i) availability, flexibility, and portability of the solution across a wide set of operating systems and HMI terminals; (ii) vector graphics support for easy adaptation to different screen resolutions; (iii) availability of scripting languages for software personalization; and (iv) interactivity and usability features.

![Fig. 7. MyHMI Flash interface of the prototype.](image)

We considered technologies such as SVG [3], Adobe Flash, and Microsoft-based solutions. At the end of our evaluation, Adobe Flash [16] emerged as the best candidate for the MyHMI client application. The developed client-side prototypes include all the widgets needed to display plant values (e.g., gauge bars, alarm LEDs, conveyor belts, tachometers, meters, and so on) and to execute user commands (e.g., start/restart and emergency push buttons, input fields, and so on). Widgets have been designed according to a separation-of-concerns philosophy: every widget is designed separately, following the well-known concept of skin. Thanks to this widget design and to the sophisticated rendering engine provided by Flash, the prototype implements much more refined interfaces than usual HMI systems, both in terms of graphical appearance and of business logic (see, for instance, the real-time conveyor emulator in Figure 7).
Following the decision presented in Section 4.1.5, the implementation strategy for run-time content adaptation and personalization on the client exploits configuration rules compiled into binary code at configuration time by the offline Configuration Tool. For compatibility with the Flash rendering engine, the rules are expressed as ActionScript code. Among the existing ActionScript compilers, we chose MTASC [19], which produces binary code with better performance than its competitors (see Section 7.2). MTASC is an open-source project able to generate Flash SWF bytecode without relying on Adobe Flash components or other tools.

7. Evaluation

From a functional standpoint, our prototype covers most of the requirements presented in Section 3. The proposed architecture allows local, fixed-remote, and mobile-remote devices to effectively interact with the server via standard HTTP connections. The adaptation subsystem allows a fine-grained specification of users and user groups, as well as content personalization and interface adaptation. The deployment device chosen for the performance evaluation of our prototype in a real-world scenario is an industrial panel equipped with a ViaX86 400 MHz processor, 64 MB of flash storage and 128 MB of RAM, running Windows CE.NET 4.2.

7.1. Server-side evaluation

The tests performed on the server side of the MyHMI solution focused on verifying its ability to sustain multiple clients with different rendering technologies without excessive performance degradation. The prototype application scored an average of 6.3 requests/sec with 6 clients connected simultaneously, with the client implementation technology equally distributed between SVG and Flash (3 SVG clients and 3 Flash clients). We also tested performance with the client and the server residing on the same embedded device.
This test revealed that MyHMI performs similarly to the existing dedicated and monolithic ESA applications. One of the issues most able to degrade server performance is the data format used for storing and managing persistent information. Modern Web technologies rely heavily on relational databases and XML documents for information gathering and exchange; however, a set of tests executed on the target hardware configuration showed that plain-text file storage is still the best option for this class of devices. As reported in Table 1 and Figure 8, we performed a detailed test of different data storage technologies under the following conditions:

- For text files, read/write operations were tested on a TXT file of 520 bytes; 1, 10 and 40 float values were processed for the low, medium and high load, respectively.
- For XML files, the low, medium and high load tests were conducted on files of 4, 14 and 20 KB respectively, containing 10, 40 and 100 complex objects (using the DOM and SAX parsers of Windows CE).
- For database technologies, we tested SQL Server CE and SQLite with low, medium and high workloads consisting of read/write operations on 200, 600 and 1000 tuples respectively, each with 10 columns holding both string and long-integer values.

### Table 1. Performance evaluation comparison for data storage technologies (ms).
<table>
<thead>
<tr>
<th rowspan="2">Tech.</th>
<th rowspan="2">Oper.</th>
<th colspan="2">Low load</th>
<th>Medium load</th>
</tr>
<tr>
<th>RAM</th>
<th>Flash</th>
<th>RAM</th>
</tr>
</thead>
<tbody>
<tr><td>XML</td><td>Read (SAX)</td><td>5.3</td><td>5.5</td><td>16.1</td></tr>
<tr><td>XML</td><td>Read (DOM)</td><td>59.2</td><td>469.8</td><td>80.4</td></tr>
<tr><td>XML</td><td>Write (DOM)</td><td>57.1</td><td>742.4</td><td>87.8</td></tr>
<tr><td>SQL Server CE</td><td>Read</td><td>148.8</td><td>143.5</td><td>377.1</td></tr>
<tr><td>SQL Server CE</td><td>Write</td><td>46</td><td>235.7</td><td>46.4</td></tr>
<tr><td>SQLite</td><td>Read</td><td>16</td><td>19.1</td><td>28.4</td></tr>
<tr><td>SQLite</td><td>Write</td><td>13.5</td><td>525.2</td><td>13.4</td></tr>
<tr><td>Text</td><td>Read</td><td>0-1</td><td>0-1</td><td>0-1</td></tr>
<tr><td>Text</td><td>Write</td><td>0-1</td><td>23.1</td><td>0-1</td></tr>
</tbody>
</table>

Figure 8. Performance results, Table 1 data (ms).

Each test was executed in both RAM (volatile) and flash (persistent) memory under the same conditions. The measured I/O time includes the time spent opening the data file, reading or writing the data, and closing the file. We chose these testing conditions to replicate typical workloads in real-world projects, in order to obtain a realistic picture of the performance of typical HMI devices. Comparing the results of standard read/write operations over text files, XML files, and embedded RDBMS solutions, we found that plain text files improve the overall performance of the system (for both main-memory and solid-state read/write operations) with respect to XML files and RDBMS technologies.
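The storage comparison above can be reproduced in miniature: the same float values stored as plain text and as XML, where the text path needs only a line split while the XML path pays for a parser. This is illustrative only; the paper's figures were measured on Windows CE hardware:

```python
import xml.etree.ElementTree as ET

# Ten float values, stored once as newline-separated plain text and
# once as an XML document (element names are invented for this sketch).
values = [round(0.5 * i, 2) for i in range(10)]

text_blob = "\n".join(str(v) for v in values)
xml_blob = "<values>" + "".join("<v>%s</v>" % v for v in values) + "</values>"

def read_text(blob):
    """Plain-text path: a straight split, no parser."""
    return [float(line) for line in blob.splitlines()]

def read_xml(blob):
    """XML path: a full parse tree is built before values come out."""
    return [float(e.text) for e in ET.fromstring(blob)]

from_text = read_text(text_blob)
from_xml = read_xml(xml_blob)
```

Both paths recover identical values; on constrained hardware the difference lies entirely in the parsing cost, which is what Table 1 quantifies.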
Likewise, our experiments showed that architectures based on Web services and/or XML document exchange can worsen the performance of the server, which is in charge of building and parsing the exchanged documents. Plain-text messages composed of simple concatenated *label:value* entries allowed us to reduce the server processing time.

7.2. Client-side evaluation

The tests performed on the client side for the chosen Flash implementation aimed at verifying: (i) the number of requests per second the client is able to send to the server; and (ii) the time needed to load a user interface page. The first test was carried out by exchanging 20 plant data items at every request cycle with a remote Web server connected to a MODBUS simulator and deployed on a standard PC (to avoid bottlenecks due to the limited resources of a server installed on an embedded device, and to simulate a real distributed environment); thus, only the Flash application was deployed on the embedded industrial panel. Results obtained using standard Flash components show that the Macromedia Flash application performs 3.2 requests/second using synchronous calls. We then manually optimized the application code, which more than doubled performance (up to 7.8 requests/sec). We obtained these results by removing all the Macromedia components dedicated to Web service connections which, while providing a convenient interface, bring heavy performance decay. The second test focuses on the page-change elapsed time, where the most demanding issues concern the execution of the personalization and adaptation rules. Before coming to the decision described in Section 4.1.5, we tested different solutions, including both compile-time and run-time user interface adaptation.

- The first approach uses XML documents, dynamically generated by the server according to the user and device rules, to configure the client application.
Such an approach requires the client to dynamically build its interface at run-time from the information parsed out of the XML document.

- The second approach, instead, overturns this perspective by applying the adaptivity and personalization rules directly on the client side, through instructions hard-coded inside the client application and produced at compile time, when the Flash application is generated.

Results showed that personalization rules applied client-side (written in the ActionScript language and executed by the Flash files produced at compile time) perform better than rules applied by a server application written in C++ for embedded devices. To further improve overall performance, we performed page-load time tests using two compiling frameworks freely available on the Internet as open-source projects: MTASC [19] and Laszlo [18]. Moreover, every tested Flash application was executed on two identical devices, the first running the Windows CE.NET 4.2 operating system and the second running Windows XP Embedded (Service Pack 2). Since the Flash player for Windows CE devices is not freely available from Adobe, a customized version of the required ActiveX control (version 7.0,70,3) was provided by NEC Corporation of America (NECAM) [17], running as a plug-in of Internet Explorer 6.0 for Windows CE. On Windows XP Embedded, we used the Adobe Flash player ActiveX control (version 8,0,22,0) as an Internet Explorer 6.0 plug-in.

Table 2. Performance evaluation comparison of different client technologies (ms)

| Tech. | Low load WinCE | Low load XPe | Medium load WinCE | Medium load XPe | High load WinCE | High load XPe |
|--------|----------------|--------------|-------------------|-----------------|-----------------|---------------|
| XML | 773 | 618 | 1472 | 1286 | 2910 | 2314 |
| Laszlo | 151 | 79 | 290 | 153 | 603 | 284 |
| MTASC | 87 | 58 | 153 | 107 | 276 | 192 |

The low-load test page was populated with 28 widgets, including 13 static objects, 11 simple read/write widgets (e.g., push buttons) and 4 complex read/write widgets (e.g., trend graphs). The medium-load and high-load pages were populated by doubling and quadrupling, respectively, the number of objects used in the first test. Table 2 and Figure 9 report our results. The XML document approach proved unacceptably slow on both WinCE and WinXP Embedded. With the Laszlo technology, instead, the client configuration of each page ran 5 times faster than with the XML approach, and the compiled application files were 3 times smaller than the XML-based ones. The MTASC solution performed even better than Laszlo and was therefore chosen as the target client deployment platform. We also compared our architecture with the current state of the art on the market: MyHMI performs as fast as the average of the other HMI solutions based on the same hardware, while providing much more powerful features; only HMI devices with proprietary built-in operating systems performed better.

8. Related Work

The requirements, trends, and opportunities of the current technological evolution of embedded systems are widely recognized [7]. Some early works describe how technologies from the Web environment can be applied to industrial HMI applications. [8] outlines possible approaches to using XML and Java for interface definition and configuration; our experience, however, highlights that applying this plain technique can be painful in terms of performance on low-power devices with embedded operating systems. Other works (e.g., [9], [11]) explore the integration of traditional field bus solutions with Ethernet-based communication between clients but, instead of proposing a full-fledged Web-based architecture, they offer a gateway-based interface for transferring information from the field to an office-like network. These approaches do not fully exploit the potential of Web interfaces (for example, in terms of richness of interfaces, adaptability and personalization), because the server-side software architecture is not structured enough for that; moreover, they usually assume an office PC platform for running the advanced remote interfaces. Service-oriented, agent-oriented, and distributed-object architectures ([10], [12]) based on Web and XML technologies have been explored too, but their results are still in an early phase of development and typically require powerful hardware. Several research activities have gathered around the concept of Open-Architecture Controllers [13], focusing on software solutions that offer as much portability and openness as possible to any kind of device and operating system; they aim at identifying the best mix of programming languages, development architectures and frameworks for granting such flexibility. They often foster Web technologies for their open availability and portability, but make no effort to increase the features or services provided. Finally, another category of works, such as [14], provides experience reports on the practical application of Internet-based solutions in specific environments, but does not generalize the results into a general architecture for handling any kind of HMI problem. Our contribution with respect to the current research is the definition of a lightweight Web-based software architecture that features standard technologies such as browsers, Web servers, and graphic players, that can be deployed on (networks of) embedded systems, delivering state-of-the-art Web applications, including personalization, multi-device adaptivity, and remote notification, without the need for heavy hardware platforms like office PCs.

9. Conclusion

In this paper we have shown how HMI systems can benefit from the introduction of cutting-edge Web technologies and best practices. The work was organized in four sequential steps: market and literature analysis; definition of the requirements for new Web-enabled HMI solutions; design of a proof-of-concept case study application; and development of the client and server applications to test the reliability and performance of the MyHMI solution. The market analysis highlighted the currently unsolved problems affecting the HMI market, yielding a set of features and requirements for the next generation of HMI solutions. The result of our work is a highly configurable architecture that can be considered a state-of-the-art reference for a new generation of HMI solutions built on flexibility, ubiquity and customization. This architecture is able to provide advanced features including personalization, multi-device adaptivity, and remote notification, without the need for heavy hardware platforms like office PCs. Future work will include architecture refinements and performance testing of the advanced features (messaging, remote logging, and so on).

10. Acknowledgment

We wish to thank the staff of ESA Elettronica S.p.A. for their valuable comments, feedback, and collaborative work. Special thanks go to Stefano Longoni (R&D department) and his team, and to Mario Colombo (CEO).

11.
References

[2] Extensible Markup Language (XML), http://www.w3.org/XML/
[3] Scalable Vector Graphics (SVG), http://www.w3.org/Graphics/SVG/
"Web Tools to Monitor and Manage Embedded Devices", http://msdn.microsoft.com/msdnmag/issues/0500/wince/default.aspx
"view controller user interface paradigm in Smalltalk-80", in: Journal of Object-Oriented Programming, Volume 1, Issue 3
"industrial automation--needs and opportunities," in Automation
"Architecture for HMI," in: IEEE Conference on Systems, Man,
"for industrial applications," in: IEEE Transactions on Instrumentation and Measurement, Feb 2003, Volume 52, Issue 1, pp. 165-174
"for industrial automation," in: IEEE Enterprise Distributed Object Computing Conference, 2003, pp. 315-320
"automation," in: IEEE Computing & Control Engineering
[12] I. Seilonen, T. Pirttioja, P. Appelqvist, A. Halme, K. Koskinen, "An Approach to Process Automation Based on Cooperating Subprocess Agents," in: Holonic and Multi-Agent Systems for Manufacturing, Springer LNCS, Volume 2744, 2004
"Manufacturing Systems," Open Architecture Control Systems
J.-E. Karlsson, "Human-computer interaction in flatness control. An intranet-based approach," in: IEEE Conference on Systems, Man, and Cybernetics, 1998, Volume 2, pp. 1323-1328
"automation," in: IEEE Computing & Control Engineering
[17] NECAM (NEC Corporation of America), http://www.necam.com
[19] MTASC (Motion-Twin ActionScript 2 Compiler), http://www.osflash.org/mtasc
[20] Siemens Sm@rtAccess technology, ware/wincc-flexible-options/wincc-flex-smartaccess.htm
[22] W3C: HTTP/1.1 persistent connections (RFC 2616, Fielding et al.), http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html
[23] W3C, "The XMLHttpRequest Object" (W3C Working Draft), http://www.w3.org/TR/XMLHttpRequest/
http://www.laszlosystems.com/lps/docs/lzx-developers-guide/persistent_connection.html
[25] Zhe Wang, Pei Cao, "Persistent Connection Behavior of Popular Browsers", http://www.cs.wisc.edu/~cao/papers/persistent-connection.html
Distribution Agreement

In presenting this thesis or dissertation as a partial fulfillment of the requirements for an advanced degree from Emory University, I hereby grant to Emory University and its agents the non-exclusive license to archive, make accessible, and display my thesis or dissertation in whole or in part in all forms of media, now or hereafter known, including display on the world wide web. I understand that I may select some access restrictions as part of the online submission of this thesis or dissertation. I retain all ownership rights to the copyright of the thesis or dissertation. I also retain the right to use in future works (such as articles or books) all or part of this thesis or dissertation.

Kanwei Li

Cost Analysis of Joins in RDF Query Processing Using the TripleT Index

By Kanwei Li, B.S., Emory University, 2008
Mathematics and Computer Science

Advisor: James J. Lu, Ph.D.
Committee Members: Li Xiong, Phillip W. Hutto
Accepted: Lisa A. Tedesco, Ph.D, Dean of the Graduate School

A thesis submitted to the Faculty of the Graduate School of Emory University in partial fulfillment of the requirements for the degree of Master of Science in Mathematics and Computer Science, 2009

Abstract

The Semantic Web movement has led to a growing popularity of RDF and its query languages. Clearly, good query performance is important in allowing information to be quickly retrieved from RDF datasets that are ever-increasing in size.
We use the TripleT indexing scheme for RDF data as a framework to examine the cost of join operations for RDF. We analyze strategies for efficient join processing for a variety of query patterns. For queries that involve multiple join conditions, we introduce a model to predict the number of I/Os required to best order the join conditions. Experimental results validate the model using three real RDF datasets.

## Acknowledgements

I would like to thank Dr. James Lu for his constant guidance, patience, and editing skills that have made the writing of this thesis both possible and enjoyable. I would also like to express my gratitude to Dr. George Fletcher for introducing me to the world of RDF, and for allowing me to build upon his wonderful TripleT index. Dr. Phillip Hutto was the one who convinced me into switching majors as an undergrad. Without our frequent chats, I might never have developed the great interest in Computer Science that I have today. I am also indebted to Dr. Li Xiong, Dr. Eugene Agichtein, Dr. Ken Mandelberg, Dr. Michelangelo Grigni, and Dr. Vaidy Sunderam, who all filled my mind with questions and answers.

## Contents

1 Introduction
1.1 Research Objective
1.2 Prior Work
2 RDF and SPARQL
2.1 Background on RDF
2.1.1 Representation Formats
2.2 RDF Datasets
2.3 Datasets Used in the Thesis
2.3.1 Dataset Statistics
2.3.2 Dataset Discussion
2.4 Background on SPARQL
3 Indexing Techniques
3.1 B+ Trees
6.5 Discussion
7 Conclusion

## List of Figures

2.1 XML representation of RDF
2.2 Notation 3 representation of RDF
2.3 SPARQL: Return all states and their capitals
3.1 TripleT diagram
5.1 Triples matching (a, b, ?v) ∧ (?v, c, d)
5.2 Joining Left and Right SAPs
5.3 Index lookup for each unique atom in the variable position
6.1 Join diagram
7.1 SP²Bench 5b query

## List of Tables

2.1 Subject, Predicate, Object statistics
2.2 Join statistics
4.1 Join CPU performance
4.2 Join CPU and I/O performance (CPU measured in seconds)
6.1 Unique atoms per position per bucket
6.2 Average Subject, Predicate, Object bucket sizes
6.3 I/O results for DBpedia
6.4 I/O results for Uniprot
6.5 I/O results for SP²Bench
6.6 I/O results for Variant Query Form

Chapter 1 Introduction

A main goal of the Semantic Web movement is to allow semantic interconnections between decentralized sources of data on the Web. RDF and SPARQL are formats designed for tagging and querying data, respectively. Up to now, not many websites have adopted Semantic Web practices such as RDFa tagging, Friend of a Friend, and SPARQL entry points. However, the rising popularity of online APIs and web services shows an encouraging trend towards resource-centric entry points to data instead of having to access the data through parsing HTML or through a browser.
Recently, the British Broadcasting Corporation (BBC) has set up online SPARQL end-points for their TV and music databases that one can query. This adoption by a large company is encouraging for the future of RDF, SPARQL, and the Semantic Web. For widespread adoption and high user satisfaction of RDF datasets, SPARQL queries must run quickly. This means that it is important to have good indexing techniques to ensure scalable performance of query processing for RDF data. It is also important to have a query optimizer to help process SPARQL queries, which are often very long and complicated.

1.1 Research Objective

This thesis can be viewed as a continuation of the work on an RDF indexing technique called TripleT, developed in 2008 by Fletcher and Beck [2]. Using TripleT as the framework, the goal is to better understand the requirements for building an effective SPARQL query optimizer. Specifically, we study the information necessary to facilitate good join ordering. We develop a model for predicting the number of I/Os required for a join based on TripleT, using statistics that are easily collected during the creation of the index. Experiments are conducted to validate the model.

1.2 Prior Work

Fletcher and Beck [2] compared the characteristics and performance of TripleT to other indexing techniques used in production software, HexTree and MAP, and concluded that TripleT is conceptually simpler and more space-efficient, while still providing the same level of support for SPARQL queries. Much literature exists on the subject of join algorithms. Mishra and Eich [4] authored a comprehensive survey paper in 1992 on join algorithms for relational databases. The main type of join in SPARQL is the equi-join, and Mishra and Eich noted that the nested-loop join, hash join, and sort-merge join were the most applicable and performant. Stocker et al. [9] laid the groundwork for SPARQL query optimization by defining Basic Graph Patterns (BGP) as the basic unit of SPARQL queries.
Additionally, they introduced heuristics for join ordering, both with and without pre-computed statistics. The goal was to obtain smaller intermediate result sets for later joins by carrying out the most selective joins first. Neumann and Weikum [5] introduced a SPARQL query engine called RDF-3X, and discussed how it uses a dynamic programming approach with selectivity histograms to estimate the cost of join paths.

Chapter 2 RDF and SPARQL

2.1 Background on RDF

RDF is a framework that describes data in the form of (subject, predicate, object) triples. It was released as a W3C recommendation in 1999 [12] and is currently the leading description model for the Semantic Web. The greatest advantage of RDF lies in its simplicity of using only (subject, predicate, object) triples for all data representation. This creates flexibility and allows the description of both data and metadata in the same fashion. Unlike the relational model, the RDF model is "pay-as-you-go" because a predetermined schema is not required, and structure can be added later on by adding new triples to define relationships present in the data. This lack of required structure is beneficial for the World Wide Web, where there are many participants who are not in close collaboration. RDF is also a good format for ontologies, as relationships can be naturally described. For example, a predicate of one triple can be the subject of another triple. The following example shows a relationship, normally considered as metadata, being described in RDF:

(John, friendof, Mark), (friendof, typeof, relationship)

The flexible nature of RDF allows this to be easily described, which is not the case in the relational model. The latter would require extra schema complexity for metadata and may require extra tables to achieve normal form. RDF has also been used to store data from multiple domains in one dataset.
For example, it would be very difficult to store structured Wikipedia content in a relational database without many tables to store different types of data. A table to describe people would have different columns than a table to describe cars, so different tables would have to be used. The RDF model would be much more appropriate, since a different set of predicates could be used for each domain. The DBpedia project [1] has adopted the RDF model and converted much of Wikipedia into an RDF dataset. RDF fits naturally into the requirements of a decentralized Semantic Web.

2.1.1 Representation Formats

There are two main representation formats for RDF: XML and Notation 3. Additionally, a subset of Notation 3 called Turtle is in development. Notation 3 and Turtle are designed for human-readability and are more concise than XML, but require a proper parser instead of an XML parser. What is important is that while the representation format may differ, the data represented is the same. Figures 2.1 and 2.2 show different representations of two triples:

(http://wikipedia.org/wiki/France, dc:title, France), (http://wikipedia.org/wiki/France, dc:publisher, Wikipedia)

```xml
<rdf:RDF xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://wikipedia.org/wiki/France">
    <dc:title>France</dc:title>
    <dc:publisher>Wikipedia</dc:publisher>
  </rdf:Description>
</rdf:RDF>
```

Figure 2.1: XML representation of RDF

The subject of both triples is "http://wikipedia.org/wiki/France", and XML conveys this by having the subject be the parent node for both of the predicate/object pairs. In Notation 3, this is accomplished by the nested indentation and by having a semicolon after the first triple instead of a period. RDF supports namespaces by allowing the declaration of prefixes. In the example above, the "dc" prefix is defined, and "dc:title" is thus short for "http://purl.org/dc/elements/1.1/title".
This convention allows readable name-spacing while also reducing data size. Additionally, RDF allows the assignment of types to atoms, such as integer, double, decimal, and boolean.

2.2 RDF Datasets

Fundamentally, an RDF dataset is a collection of RDF triples. As such, there are many possible storage formats, but most RDF datasets are distributed in plain-text format for universal portability. Many projects that use RDF data store triples in a relational database, using a table with three columns: subject, predicate, and object. Triples can then be accessed using SQL. While this method leverages current relational database technology, tabular relational stores do not fit the graph-like nature of RDF, and thus new RDF-native stores have been developed. As of 2009, the primary RDF-native stores available are: 1) Jena [3], an open-source Java framework that supports SPARQL; 2) Sesame [7], an open-source framework in C that uses its own querying language called SeRQL (Sesame RDF Query Language); 3) Virtuoso [11], a framework that is available in both commercial and open-source packages, both supporting SPARQL. Additionally, a framework named RDF-3X [5] was introduced in 2008, and the authors state that it is available on request for non-commercial use.

2.3 Datasets Used in the Thesis

It is important to perform experiments on real-world data. This thesis will use a subset of one million triples of each of: 1) the DBpedia dataset [1], an extract of Wikipedia articles converted into RDF triples, in which semantic information, such as categories and infoboxes, has been kept as well; 2) the Uniprot dataset [10], which describes proteins and provides annotation data; 3) the SP²Bench [8] dataset, a recently developed dataset that tries to emulate an authorship database, such as DBLP.
### 2.3.1 Dataset Statistics

To get an idea of the distribution of atoms in the datasets, the number of unique atoms in the subject, predicate, and object positions was computed and tabulated in Table 2.1. Additionally, the number of atoms that could possibly take part in one of the 3 possible joins between different positions was calculated and tabulated in Table 2.2. The abbreviations S, P, and O denote subject, predicate, and object respectively. <table> <thead> <tr> <th>Dataset</th> <th>Uniq Subj</th> <th>Uniq Pred</th> <th>Uniq Obj</th> </tr> </thead> <tbody> <tr> <td>DBpedia</td> <td>136108</td> <td>8878</td> <td>282199</td> </tr> <tr> <td>Uniprot</td> <td>592639</td> <td>79</td> <td>294676</td> </tr> <tr> <td>SP²Bench</td> <td>31629</td> <td>61</td> <td>81919</td> </tr> </tbody> </table> Table 2.1: Subject, Predicate, Object statistics <table> <thead> <tr> <th>Dataset</th> <th>SP</th> <th>SO</th> <th>PO</th> </tr> </thead> <tbody> <tr> <td>DBpedia</td> <td>0</td> <td>48085</td> <td>0</td> </tr> <tr> <td>Uniprot</td> <td>0</td> <td>105560</td> <td>8</td> </tr> <tr> <td>SP²Bench</td> <td>0</td> <td>14816</td> <td>0</td> </tr> </tbody> </table> Table 2.2: Join statistics

2.3.2 Dataset Discussion

DBpedia is interesting in that it has many unique predicates, while Uniprot and SP²Bench have very few. This is because DBpedia contains multiple domains of knowledge, and each domain uses its own set of predicates. For example, predicates to describe people include "dbpedia:ontology/religion", "dbpedia:ontology/spouse" and "dbpedia:ontology/birthdate", which would logically only be used to describe people and not other kinds of resources. The broad scope of DBpedia means it has more unique predicates than the other two. Also, it is interesting that predicates never appear as subjects and very seldom appear as objects. This shows that none of these datasets creates an ontology where metadata is treated as data and a hierarchy is created.
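The per-position statistics above can be computed in a single pass over a dataset. Below is a minimal sketch, not the thesis's actual tooling, assuming triples are 3-tuples of strings and using a toy dataset in place of the real ones; the function name is illustrative:

```python
from itertools import combinations

def dataset_statistics(triples):
    """Collect the unique atoms in each position (as in Table 2.1) and
    the atoms that could take part in a cross-position join (as in
    Table 2.2): an atom can join two positions only if it occurs in both."""
    positions = {"S": set(), "P": set(), "O": set()}
    for s, p, o in triples:
        positions["S"].add(s)
        positions["P"].add(p)
        positions["O"].add(o)
    unique = {pos: len(atoms) for pos, atoms in positions.items()}
    joinable = {a + b: len(positions[a] & positions[b])
                for a, b in combinations("SPO", 2)}
    return unique, joinable

# Toy dataset: "Mark" appears as both a subject and an object, so SO = 1.
triples = [
    ("John", "friendof", "Mark"),
    ("John", "friendof", "Alex"),
    ("Mark", "friendof", "Tim"),
]
unique, joinable = dataset_statistics(triples)
```

On the toy data this yields 2 unique subjects, 1 unique predicate, 3 unique objects, and only the SO join is possible, which mirrors the pattern of the real datasets in Table 2.2, where almost all joinable atoms are subject–object.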
2.4 Background on SPARQL SPARQL is the W3C-backed query language for RDF and became an official W3C recommendation in 2008 [13]. The language is inspired by SQL and adopts many of its keywords, such as SELECT, DISTINCT, FROM, and WHERE. Because of RDF's graph-like nature, additional keywords are necessary, such as PREFIX, OPTIONAL, UNION, and GRAPH. An example SPARQL query is shown in Figure 2.3. ``` SELECT ?capital ?state WHERE { ?x city_name ?capital . ?x capital_of ?y . ?y state_name ?state . } ``` Figure 2.3: SPARQL: Return all states and their capitals SPARQL queries are defined by basic graph patterns (BGPs), which are conjunctions of simple access patterns (SAPs). An SAP is a triple whose elements are any combination of atoms or variables, where variables are prefixed by a "?". Atoms in SAPs are called bound patterns, and variables are called unbound patterns. An SAP selects triples that match all of its bound patterns. For example, the SAP (John, friendof, ?x) would match any triple with both subject "John" and predicate "friendof". It would thus match (John, friendof, Mark), but not (John, parentof, Tim). A BGP can have more than one SAP. For example, consider the BGP: (John, friendof, ?x) ∧ (?x, friendof, Tim). The left SAP matches any triple with subject "John" and with predicate "friendof"; the right SAP matches any triple with predicate "friendof" and with object "Tim". To compute the result set of ?x, atoms that appear in both the object position of the triples matched by the left SAP and the subject position of the triples matched by the right SAP are selected. If a query for the above BGP were run on the dataset consisting of (John, friendof, Mark), (John, friendof, Alex), (Mark, friendof, Tim), the result set of ?x would be {Mark}.
Semantically, the query asks, "Which friends of John are also friends of Tim?" The query requires a join on the object position of the left SAP and the subject position of the right SAP. The major differences between SPARQL and SQL are: 1) SPARQL allows the use of variables in SAPs that match any atom in that position. The SAP (?a, ?b, ?c) by itself would match all the triples in the dataset. 2) There is no explicit equi-join operator in SPARQL, so the query engine has to deduce what needs to be joined. Consider the example query in Figure 2.3. There are four distinct variables in the query, and two joins are necessary: Subject–Subject on ?x, and Object–Subject on ?y. In conclusion, RDF's flexibility makes it well suited to the Semantic Web, and SPARQL provides a natural way to query RDF datasets. Chapter 3 Indexing Techniques Real-world SPARQL queries usually consist of multiple SAPs, and most SAPs require the lookup of atoms. Thus, before querying is performed, RDF data should first be indexed so that the lookup of atoms for SPARQL queries can scale to larger datasets. Most RDF datasets are designed to be read far more often than updated, so fast lookup is more important in the long run than fast index creation. 3.1 B+ Trees Much like relational databases, indexing schemes for RDF generally rely on the B+ tree as the data structure of choice. The string representation of an atom can be used as the key in the tree, and the triples that contain the atom as the values. A B+ tree is a balanced tree whose depth and width can be adjusted by setting parameters that determine the level of branching. A deep tree allows faster comparisons of keys at any level, but requires more levels to be traversed on average. A shallow and wide tree requires more comparisons at every level and more space per block, but there are fewer levels on average and thus fewer I/O operations needed. The parameters should be set to best match the filesystem and hardware.
Because datasets can potentially be gigabytes in size, it is often necessary for the index itself to reside on disk. A B+ tree allows an index to be stored on disk by using block-oriented storage, meaning that each individual node, along with its pointers and/or data, is stored in blocks which can be accessed with I/O operations. This way, data can be accessed through a few I/O operations to go from the root of the tree to a leaf node with data. Because I/O operations are orders of magnitude slower than main memory accesses, the classical measure of query performance is a function of I/O operations [2]. For the experiments performed in this thesis, all nodes and data are stored in main memory, and I/O accesses are simulated by recording an I/O access whenever a node in the B+ tree is visited. This way, good performance can be achieved when conducting experiments while still measuring the correct I/O cost. 3.2 RDF Indexing Schemes The current state-of-the-art indexing schemes for RDF are MAP and HexTree [2]. TripleT is an indexing scheme developed by Fletcher and Beck in 2008. While they all use B+ trees, they differ in how many trees are used and what the leaf nodes of the trees contain. 3.2.1 MAP and HexTree MAP is an indexing scheme that utilizes 6 indices, one for each possible ordering of subject, predicate, and object. For example, to look up (John, friendof, ?x), one could either look in the SPO index for "John#friendof#?x", or the OPS index for "?x#friendof#John", etc. HexTree is a similar scheme that also utilizes 6 indices, one for each possible pairing of atoms: SO, OS, SP, PS, OP, PO. For a query like (John, friendof, ?x), (?x, friendof, Tim), the SP index is queried to obtain the result set for the first SAP, and the PO index is queried for the second SAP. The two result sets are then joined to get the final result.
The disadvantage of these two schemes is the loss of data locality that results from having multiple B+ trees; queries may thus incur the extra I/O cost of having to look in multiple indices [2]. 3.2.2 TripleT TripleT (diagrammed in Figure 3.1) is an indexing scheme that uses only one B+ tree, whose leaf nodes represent atoms. Each leaf node holds a payload that is further split into three buckets: subject, predicate, and object. For example, if the triple (Shakespeare, dc:author, Romeo and Juliet) were stored in a TripleT index, there would be three leaf nodes in the B+ tree, "Shakespeare", "dc:author", and "Romeo and Juliet", each of which has a payload. The triple would exist in the subject bucket of the "Shakespeare" payload, in the predicate bucket of the "dc:author" payload, and finally in the object bucket of the "Romeo and Juliet" payload. There is duplication in this scheme, as each triple appears in three different buckets. However, the triple can be accessed from any of its atoms through one index lookup. Additionally, only one B+ tree is necessary to store all of the data. Because TripleT is designed to be stored on disk, the payloads also have to be stored on disk. Thus, the buckets themselves may be split up into separate blocks whose size should be set to fit the filesystem. For TripleT, a traversal of all triples in a bucket will incur an I/O cost equal to the total number of blocks needed to store that bucket. This factor becomes important when performing experiments using TripleT. Figure 3.1: TripleT diagram This thesis builds on an implementation of TripleT written in Python that stores the index in main memory and simulates I/O accesses by keeping track of the I/O operations that would be needed if the index were stored on disk. All I/O measurements use this simulated metric. A "blockenize" function is used to calculate the number of blocks necessary to store any particular payload bucket.
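The payload structure just described can be sketched in a few lines. This is a toy illustration, not the thesis implementation: a dict stands in for the B+ tree, and `triples_per_block` is an illustrative parameter for the block-counting helper, which mirrors the role of the "blockenize" function:

```python
from collections import defaultdict

def build_triplet(triples):
    """Toy TripleT index: one payload per atom, split into subject (S),
    predicate (P), and object (O) buckets holding the triples that contain
    the atom in that position.  Each triple is stored three times, once
    per atom, so it is reachable from any of its atoms in one lookup."""
    index = defaultdict(lambda: {"S": [], "P": [], "O": []})
    for triple in triples:
        s, p, o = triple
        index[s]["S"].append(triple)
        index[p]["P"].append(triple)
        index[o]["O"].append(triple)
    return index

def blockenize(bucket, triples_per_block):
    """Number of disk blocks needed to store a payload bucket
    (ceiling division), as used when simulating I/O costs."""
    return -(-len(bucket) // triples_per_block)

index = build_triplet([("Shakespeare", "dc:author", "Romeo and Juliet")])
# The single triple now sits in index["Shakespeare"]["S"],
# index["dc:author"]["P"], and index["Romeo and Juliet"]["O"].
```

A full bucket traversal then costs `blockenize(bucket, ...)` simulated I/Os, which is how large predicate buckets come to dominate the I/O counts in later chapters.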
Chapter 4 Join Algorithms One of the most important and useful database operations is the join operation. In relational databases, this means that multiple tables are joined together based on some criterion, most often equality, resulting in a subset of the cartesian product of the participating tables. In SPARQL, a join occurs when the same variable exists in different SAPs. In the example of (John, friendof, ?x), (?x, friendof, Tim), ?x exists in both SAPs and represents an equi-join. Because these joins frequently occur in SPARQL queries and are costly when processed naively, optimizing them can yield large performance benefits. Mishra's survey paper on relational joins [4] discusses three major algorithms used for equi-joins that look for equality between elements in two different tables: the nested-loop join, hash join, and sort-merge join. These joins can also be used for SPARQL queries. Additionally, there exists the index nested-loop join that will be introduced and used in later chapters. 4.1 Nested-loop join The nested-loop join iterates one relation over another relation in a nested-loop fashion. The I/O cost is \( \#blocks(R1) + |R1| \times \#blocks(R2) \), where \( R1 \) and \( R2 \) are the joining relations. This join is easy to implement and does not require sorted relations to work. However, its quadratic complexity makes it unattractive for joining relations with many triples. Additionally, because the triples to be joined are all contained in physical blocks, the nested-loop join can exhibit very bad behavior, repeatedly re-reading blocks whenever the inner loop spans multiple blocks. A simple modification that processes all of \( R2 \) for each block of \( R1 \) improves the I/O cost to \( \#blocks(R1) + \#blocks(R1) \times \#blocks(R2) \). This variant is also known as the block nested-loop join.
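The block nested-loop variant can be sketched as follows. This is a minimal illustration with simulated I/O counting, assuming each relation is given as a list of blocks (each block a list of triples) and joining the object position of R1 with the subject position of R2; the data and function name are illustrative:

```python
def block_nested_loop_join(r1_blocks, r2_blocks):
    """Block nested-loop equi-join of R1.object with R2.subject.
    Every block of R2 is (re-)read once per block of R1, so the simulated
    I/O count is #blocks(R1) + #blocks(R1) * #blocks(R2)."""
    result, ios = [], 0
    for b1 in r1_blocks:
        ios += 1                          # read one block of R1
        for b2 in r2_blocks:
            ios += 1                      # re-read a block of R2
            for t1 in b1:
                for t2 in b2:
                    if t1[2] == t2[0]:    # object of t1 == subject of t2
                        result.append((t1, t2))
    return result, ios

r1 = [[("John", "friendof", "Mark")], [("John", "friendof", "Alex")]]
r2 = [[("Mark", "friendof", "Tim")]]
pairs, ios = block_nested_loop_join(r1, r2)
# ios == #blocks(R1) + #blocks(R1) * #blocks(R2) == 2 + 2 * 1 == 4
```

With two R1 blocks and one R2 block this costs 4 simulated I/Os, matching the block nested-loop formula above.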
4.2 Hash join The hash join requires the creation of a hash table for the triples of one relation, with the key being the atom and the values being the triples containing that atom. Looking up an atom in the hash table can be achieved in amortized O(1) time. To minimize collisions, it is better to hash the triples of the smaller relation. Since all triples in both relations have to be visited, it does not help to hash the triples of the larger relation. Once the hash table has been built for the smaller relation, the triples of the larger relation are iterated over, using the hash table to check whether they satisfy the join condition. Each time two triples satisfy the condition, the pair is added to the result set. The I/O cost for the hash join is \( \#blocks(R1) + \#blocks(R2) \), where \( R1 \) and \( R2 \) are the joining relations. All blocks of both relations have to be visited, so the hash join has a very fixed I/O performance profile. ### 4.3 Sort-Merge join The sort-merge join takes advantage of the merge operation, which can be carried out on two sorted relations with \( \#blocks(R1) + \#blocks(R2) \) I/O operations, where \( R1 \) and \( R2 \) are the joining relations. The requirement of being sorted is extremely important, since otherwise it would take additional CPU time and I/Os to sort the relations. The structure of TripleT allows the triples of each payload bucket to be sorted on one position: subject, predicate, or object. Statistics could help determine which position would benefit most from being sorted. If there are many joins on the subject position, then it is better to have the buckets sorted on the subject position so that a merge join can be used without additional sorting. If it is found that sort-merge is much faster than the alternatives, then extra buckets could potentially be created so that triples are sorted on every position, thus trading index size for join performance.
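The hash join of Section 4.2 can be sketched as follows. This is a minimal illustration, not the thesis implementation, joining the object position of R1 with the subject position of R2 on flat lists of triples; the smaller relation is hashed, as the text recommends:

```python
from collections import defaultdict

def hash_join(r1, r2):
    """Hash equi-join of R1.object with R2.subject.  The smaller relation
    is used to build the hash table; the larger one is then streamed
    against it, so every triple of both relations is visited exactly once."""
    build, probe, swapped = (r1, r2, False) if len(r1) <= len(r2) else (r2, r1, True)
    table = defaultdict(list)
    for t in build:
        key = t[0] if swapped else t[2]   # hash on the join position
        table[key].append(t)
    result = []
    for t in probe:
        key = t[2] if swapped else t[0]
        for match in table.get(key, []):
            # Always report pairs as (R1 triple, R2 triple).
            result.append((match, t) if not swapped else (t, match))
    return result

r1 = [("John", "friendof", "Mark"), ("John", "friendof", "Alex")]
r2 = [("Mark", "friendof", "Tim")]
pairs = hash_join(r1, r2)
```

On the example dataset from Chapter 2 this returns the single pair joining on "Mark", consistent with the result set {Mark} computed there.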
Another interesting aspect of the sort-merge join is that, unlike the nested-loop and hash joins, not all blocks in each bucket always have to be visited. If the merge operation reaches the end of one relation, it can stop even if it has not visited the rest of the other relation. Thus, while all the blocks of one relation always have to be accessed, it is possible to skip blocks of the other relation. ### 4.4 Measuring Join Performance The main performance metric for join algorithms is the number of I/O operations required, since these are almost always the bottleneck and are also platform-independent. An I/O operation is performed when accessing any node in the B+ tree. Because the B+ tree is balanced and all payloads are at the leaf nodes, the I/O cost for looking up any atom is constant and equal to the height of the B+ tree. I/O operations are also necessary when traversing payload buckets that may be stored across multiple blocks to fit all the triples. Thus, common predicate atoms often take many blocks to store, while most subject and object atoms are not common and require only one block to store. This statistic is considered in later chapters and tabulated in Table 6.2 for the three datasets. As discussed previously, the join algorithms usually have to visit every block of the joined relations at least once. The exception is the sort-merge join, which can in some cases finish traversing one relation and thus stop traversing the other relation. If the other relation had unvisited blocks, then it is possible to save some I/O operations. 4.4.1 Join CPU Performance on Synthetic Data Even though I/O performance is usually the better metric for joins, it would be interesting to see the pure algorithmic performance of the nested-loop, hash, and sort-merge joins, and also to see whether CPU performance could potentially become a bottleneck. Two random sets of triples were created, each populated from a pool of a given number of unique atoms.
The atoms were scattered across all positions, which would not be the case for real-world data; for example, atoms found as predicates rarely also exist as subjects or objects. Sorting time was included for the sort-merge join experiments, as it cannot be guaranteed that the data will always be pre-sorted. The three kinds of joins described in this paper were implemented in Python. All experiments were run on a MacBook Pro 2.2 GHz with 4GB RAM and a standard build of Python 2.6.2. The timings were taken using Python's `timeit` module, which is designed for profiling code. <table> <thead> <tr> <th># Unique Atoms</th> <th>Number of Triples</th> <th>Nested-loop</th> <th>Hash</th> <th>Sort-merge</th> </tr> </thead> <tbody> <tr> <td>50</td> <td>10000</td> <td>39.974s</td> <td>0.822s</td> <td>3.187s</td> </tr> <tr> <td>50</td> <td>100000</td> <td>2201.645s</td> <td>6.035s</td> <td>621.417s</td> </tr> <tr> <td>500</td> <td>10000</td> <td>38.745s</td> <td>0.164s</td> <td>2.942s</td> </tr> <tr> <td>500</td> <td>100000</td> <td>2711.327s</td> <td>79.355s</td> <td>638.210s</td> </tr> </tbody> </table> Table 4.1: Join CPU performance The nested-loop join exhibits the expected super-linear run-time behavior, as increasing the size of the data by a factor of 10 increased run-time by a factor of over 35. The hash join performed better than the sort-merge join but did not scale as well to the larger synthetic datasets. This experiment also served to verify that the joins were correctly implemented: the result sets of the different joins were compared at the end and found to be the same. 4.4.2 Join I/O and CPU Performance on Datasets I/O performance is a combination of bucket traversals and index lookups. The nature of B+ trees makes all data appear at leaf nodes, so the I/O cost of index lookups is the same for all atoms.
However, each atom has a different payload with three buckets of varying size, resulting in different numbers of blocks that have to be traversed. This is how I/O performance can differ when joining on the buckets of different atoms. To compare the I/O performance of the three joins, queries of the form \((a, b, \text{?}v) \land (\text{?}v, c, d)\) were performed on the datasets, and I/O and CPU performance were measured. Here, "a", "b", "c", and "d" represent arbitrarily picked but fixed atoms.

<table>
<thead>
<tr><th>Dataset</th><th>Nested-loop Avg I/O</th><th>Nested-loop Avg CPU</th><th>Hash Avg I/O</th></tr>
</thead>
<tbody>
<tr><td>DBpedia</td><td>2.0221</td><td>1.832e-5</td><td>2.0002</td></tr>
<tr><td>Uniprot</td><td>2.0365</td><td>2.011e-5</td><td>2.0004</td></tr>
<tr><td>SP²Bench</td><td>2.0503</td><td>1.163e-5</td><td>2.0503</td></tr>
</tbody>
</table>

Table 4.2: Join CPU and I/O performance (CPU measured in seconds)

The CPU times were all fairly similar, because the buckets in the real datasets contained fewer triples than those used in the synthetic benchmark. As predicted, the I/O performance of the nested-loop join was the worst, as it often had to revisit blocks it had already visited. No I/O difference was found between the hash join and the sort-merge join, so the special case in which the sort-merge join could use fewer I/Os never occurred. In conclusion, the hash join appears to be the equi-join of choice for our purposes: it does not require its input relations to be sorted while providing CPU and I/O performance similar to the sort-merge join.

Chapter 5 Query Optimization

In relational databases, when multiple join conditions exist in a query, the choice of which condition to check first may significantly affect the performance of the overall query process.
Similar decisions and trade-offs exist for SPARQL queries, where each variable shared by a pair of SAPs represents one join condition.

5.1 Join Ordering

Given two equi-join conditions, it is more efficient to compute the condition that produces the smaller result set first, since this minimizes the number of checks needed for the second join condition. This can be seen in the following simple example. Consider the BGP: \((\text{?}x, a, \text{?}y) \land (\text{?}y, b, \text{?}z) \land (\text{?}x, c, \text{?}z)\). To satisfy the query, the initial join can be performed between any two SAPs. Assume 1000 atoms satisfy the join on ?x, 100 atoms satisfy the join on ?y, and 10 atoms satisfy the join on ?z. If the left and right SAPs were joined on ?x first, the resulting relation of 1000 atoms would then have to be joined with the middle SAP. If the middle and right SAPs were joined on ?z first, the resulting relation would have only 10 atoms to join with the left SAP. The final result is the same for both join orderings, so performing the join that produces the smaller result set first is usually best.

5.2 Processing SPARQL Queries with TripleT

One important difference between SPARQL joins using TripleT and traditional relational joins is that TripleT provides an index for all atoms in the graph. Thus, any SAP containing an atom can be computed efficiently through the index. To get a sense of what processing SPARQL queries in TripleT entails, and what kinds of query paths are available, an example query will be analyzed. We first introduce some notation and definitions: Given a dataset $G$, let $T_G$ denote the TripleT index associated with $G$, and let $S(G)$, $P(G)$, $O(G)$ denote the sets of subject, predicate, and object atoms in $G$ respectively. Given an atom "a", $S_a(G)$ denotes the subject bucket of the "a" payload in $T_G$. Similarly, $P_a(G)$ and $O_a(G)$ denote the predicate and object buckets of the "a" payload, respectively.
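The join-ordering heuristic of Section 5.1 can be sketched in Python. This is an illustrative sketch, not the dissertation's implementation: triples are plain tuples, and `estimated_size` stands in for whatever cardinality estimate is available.

```python
def hash_join(left, right, left_pos, right_pos):
    """Equi-join two lists of triples; positions are 0 (subj), 1 (pred), 2 (obj).
    Build a hash table on the left input, then probe it with the right input."""
    table = {}
    for t in left:
        table.setdefault(t[left_pos], []).append(t)
    return [(l, r) for r in right for l in table.get(r[right_pos], [])]

def order_joins(join_conditions, estimated_size):
    """Evaluate the join condition with the smallest estimated result first."""
    return sorted(join_conditions, key=estimated_size)

# Toy data: join left.obj = right.subj (the ?v join from the running example).
left = [("a", "b", "v1"), ("a", "b", "v2")]
right = [("v1", "c", "d"), ("v1", "c", "e")]
pairs = hash_join(left, right, 2, 0)
```

With the cardinalities from the example above (1000 for ?x, 100 for ?y, 10 for ?z), `order_joins` would schedule the ?z join first.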
Consider the BGP: $(a, b, ?v) \land (?v, c, d)$, where "a", "b", "c" and "d" are distinct atoms. There are many practical applications for this query form, such as "Which journal citations does resource 'Q2P320' cite?" This query would match the Uniprot triples shown in Figure 5.1.

@prefix uniprot: <purl.uniprot.org/>.
@prefix w3c: <www.w3.org/1999/02/22-rdf-syntax-ns>.
uniprot:uniprot/Q2P320 uniprot:core/citation uniprot.rdf#_5C0A .
uniprot.rdf#_5C0A w3c:type uniprot:core/Journal_Citation .

Figure 5.1: Triples matching $(a, b, ?v) \land (?v, c, d)$

This query is interesting in that there are two ways to obtain its answers. One way, shown in Figure 5.2, is to find the triples that match the atoms of the left SAP, the triples that match the atoms of the right SAP, and then perform a join on $?v$. To get the triples that match the left SAP, we can consult $T_G$ to obtain $S_a(G)$ and, within those triples, select the ones with predicate "b". The resulting set contains exactly the triples that match the left SAP. We might instead compute $P_b(G)$ first, but as Table 2.1 shows, there are typically many more unique subject atoms than predicate atoms. Hence, selectivity in the subject position is higher, and looking up a subject atom is likely to yield a smaller set of triples than looking up a predicate atom.

1) Let $Left = \sigma_{\text{pred}=b}(S_a(G))$
2) Let $Right = \sigma_{\text{pred}=c}(O_d(G))$
3) return join(Object–Subject, $Left$, $Right$)

Note: In step 3, "join" can be any of the three algorithms discussed previously, and the first parameter indicates the join condition to be checked in $Left$ and $Right$.

Figure 5.2: Joining Left and Right SAPs

Another strategy, shown in Figure 5.3, is to find all the triples that match one SAP and then do an index lookup for each unique atom in the variable position. This approach is also known as the index nested-loop join.
If the left SAP were chosen, the unique atoms that appear as objects of the triples satisfying the left SAP are determined first. For each of these atoms, an index lookup is done and the subject buckets are concatenated to create an intermediate set of triples. Finally, a join is performed between the intermediate set and the set of triples that matched the left SAP. This approach is usually not optimal because more than two index lookups are necessary, but it can be reasonable if the atoms in one SAP have large buckets that would incur a high I/O cost to scan.

1) Let $Left = \sigma_{pred=b}(S_a(G))$
2) Let $unique\_objects = \{t[obj] \mid t \in Left\}$
3) Let $Right = [\,]$
4) for $obj$ in $unique\_objects$: $Right.append(S_{obj}(G))$
5) return $join(Object$–$Subject, Left, Right)$

Figure 5.3: Index lookup for each unique atom in the variable position

5.3 Discussion

From the above discussion, the following basic query processing strategies are clear:

1) When processing an SAP with two variables and one atom, use the index to retrieve the bucket associated with the atom.
2) When processing an SAP with more than one atom, use the index to retrieve the bucket of the most selective atom.

The second strategy suggests maintaining a count of the number of occurrences of each atom, which may be infeasible for a large dataset. A reasonable approximation is to maintain the number of unique atoms in the subject, predicate, and object positions: the higher the number, the greater the likelihood that the position has high selectivity. Even when more than one join condition exists, observation #1 implies that efficient processing is achievable as long as each SAP contains at least one atom. The most challenging queries are those that contain SAPs with variables in every position. Indeed, such queries may require multiple index traversals that cannot be determined a priori and may give rise to different join orderings.
Investigating these types of queries is the focus of the next chapter.

Consider the BGP with one SAP that contains all variables: \((a, ?y, ?x) \land (?x, ?y, ?z)\). The all-variable SAP by itself would match every triple in the dataset, but the restrictions placed by the first SAP through the "a" atom will often eliminate a large number of the triples. Assuming we first retrieve \(S_a(G)\) for the left SAP, two reasonable paths to the answer set may be taken:

1) For each atom \(b\) in the object position of \(S_a(G)\), retrieve \(S_b(G)\) and join with \(S_a(G)\) on the variable \(?y\). Concatenate the results of the joins.

2) For each atom \(b\) in the predicate position of \(S_a(G)\), retrieve \(P_b(G)\) and join with \(S_a(G)\) on the variable \(?x\). Concatenate the results of the joins.

Formally, these can be expressed as:

\[ S_a(G) \bowtie \bigcup \{S_b(G) \mid b \in \pi_{\text{obj}}(S_a(G))\} \tag{6.1} \]

\[ S_a(G) \bowtie \bigcup \{P_b(G) \mid b \in \pi_{\text{pred}}(S_a(G))\} \tag{6.2} \]

To choose correctly between options #1 and #2 requires knowing: (a) the number of unique atoms in \( \pi_{\text{obj}}(S_a(G)) \) and \( \pi_{\text{pred}}(S_a(G)) \), and (b) the size of the bucket \( S_b(G) \) or \( P_b(G) \) for each \( b \) in \( \pi_{\text{obj}}(S_a(G)) \) and \( \pi_{\text{pred}}(S_a(G)) \). The first determines the number of I/Os required to find the relevant buckets for the all-variable SAP, and the second determines the number of I/Os required to join these buckets with \( S_a(G) \). The number of I/Os necessary for a join path can be modeled by Equation 6.3, where \( \alpha \) is either \( S_{t[\text{obj}]}(G) \) or \( P_{t[\text{pred}]}(G) \) for our example.

\[ \sum_{t \in S_a(G)} \bigl(\text{height}(T_G) + \#\text{blocks}(\alpha)\bigr) \tag{6.3} \]

Evaluating Equation 6.3 exactly is difficult, however, as the amount of information that would need to be maintained for (a) and (b) can be very large.
Intuitively, an easy-to-maintain measure that gives a fairly accurate indication of the preferred join path is the set of selectivity values for pairs of positions, e.g. Subject–Object. This is a function of the number of atoms that can possibly participate in any pair of positions: the higher the number, the lower the selectivity. For instance, Table 2.2 tells us that a Subject–Object join is less selective than a Predicate–Object join, since many more atoms appear in both positions of the first pair than of the second. We do not consider this measure in our study since it does not provide particularly useful information for joins on the same position. Instead, we propose a different approximation based on the following:

(a') The average number of unique subject, predicate, and object atoms per subject, predicate, and object bucket respectively.

(b') The average number of blocks per subject, predicate, and object bucket.

These two statistics for the three datasets we use are shown in Tables 6.1 and 6.2 respectively.

Figure 6.1: Join diagram

<table>
<thead>
<tr><th>Dataset</th><th>Subj Bucket: Uniq Pred</th><th>Subj Bucket: Uniq Obj</th><th>Pred Bucket: Uniq Subj</th><th>Pred Bucket: Uniq Obj</th><th>Obj Bucket: Uniq Subj</th><th>Obj Bucket: Uniq Pred</th></tr>
</thead>
<tbody>
<tr><td>DBpedia</td><td>4.72</td><td>5.77</td><td>72.39</td><td>44.43</td><td>2.79</td><td>1.40</td></tr>
<tr><td>Uniprot</td><td>1.47</td><td>1.69</td><td>11003</td><td>3859</td><td>3.39</td><td>1.03</td></tr>
<tr><td>SP²Bench</td><td>4.99</td><td>5.14</td><td>2585</td><td>1361</td><td>1.99</td><td>1.01</td></tr>
</tbody>
</table>

Table 6.1: Average number of unique atoms per bucket

That (a') and (b') approximate (a) and (b), respectively, is clear.
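As a sketch (not the dissertation's code), statistics (a') can be computed in one pass over the triples; in practice they could be gathered during index creation. Representing triples as Python tuples is an assumption made for illustration.

```python
from collections import defaultdict

# Position indices within a triple (subject, predicate, object).
SUBJ, PRED, OBJ = 0, 1, 2

def avg_unique_per_bucket(triples, bucket_pos, counted_pos):
    """Statistic (a'): average number of unique `counted_pos` atoms
    appearing in the buckets keyed on `bucket_pos`."""
    buckets = defaultdict(set)
    for t in triples:
        buckets[t[bucket_pos]].add(t[counted_pos])
    return sum(len(s) for s in buckets.values()) / len(buckets)

triples = [
    ("s1", "p1", "o1"),
    ("s1", "p1", "o2"),
    ("s2", "p1", "o1"),
]
# Average unique objects per subject bucket: s1 has {o1, o2}, s2 has {o1}.
avg = avg_unique_per_bucket(triples, SUBJ, OBJ)  # (2 + 1) / 2 = 1.5
```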
Using these statistics, we can estimate the number of I/Os necessary for both options #1 and #2; the resulting estimates are given in Equations 6.4 and 6.5 respectively.

<table>
<thead>
<tr><th>Dataset</th><th>Avg #blocks(S(G))</th><th>Avg #blocks(P(G))</th><th>Avg #blocks(O(G))</th></tr>
</thead>
<tbody>
<tr><td>DBpedia</td><td>1.03</td><td>2.94</td><td>1.04</td></tr>
<tr><td>Uniprot</td><td>1.00</td><td>159.35</td><td>1.02</td></tr>
<tr><td>SP²Bench</td><td>1.00</td><td>66.90</td><td>1.02</td></tr>
</tbody>
</table>

Table 6.2: Average Subject, Predicate, Object bucket sizes

**Notation:** Let $A, B \in \{\text{subj}, \text{pred}, \text{obj}\}$. We denote by $\gamma^B_{A,a}(G)$ the average number of unique $B$-atoms for each $A$-atom in $G$, and by $\beta_A(G)$ the average number of blocks per $A$ bucket in $G$. For instance, $\gamma^{\text{obj}}_{\text{subj},a}(G)$ is the average number of unique objects for each subject atom "a".

$$\gamma^{\text{obj}}_{\text{subj},a}(G) \times \beta_{\text{subj}}(G) \tag{6.4}$$

$$\gamma^{\text{pred}}_{\text{subj},a}(G) \times \beta_{\text{pred}}(G) \tag{6.5}$$

The hypothesis is that, using the two statistics described above, we can predict the join path that uses fewer I/Os. The following six queries will be used in our experiments to validate this hypothesis. Note that an underscore represents a variable which does not participate in the join.

Q1. $(a, ?y, ?x) \land (?x, ?y, \_)$
Q2. $(a, ?y, ?x) \land (\_, ?y, ?x)$
Q3. $(?x, a, ?y) \land (?x, \_, ?y)$
Q4. $(?x, a, ?y) \land (?y, \_, ?x)$
Q5. $(?x, ?y, a) \land (?x, ?y, \_)$
Q6. $(?x, ?y, a) \land (\_, ?y, ?x)$

For each query, the predicted number of I/Os required to go through ?x and through ?y were calculated using our model. The predicted difference was also calculated as (predicted ?x I/Os − predicted ?y I/Os).
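Under this model, choosing a join path reduces to comparing the two products in Equations 6.4 and 6.5. A minimal sketch follows; the function names are illustrative, and the numbers are the DBpedia values from Tables 6.1 and 6.2, used purely as an example:

```python
def predicted_ios(gamma, beta):
    """Equations 6.4/6.5: expected number of buckets to fetch
    times the average number of blocks per bucket of that kind."""
    return gamma * beta

def choose_join_path(gamma1, beta1, gamma2, beta2):
    """Return 1 or 2 for whichever option is predicted to need fewer I/Os."""
    return 1 if predicted_ios(gamma1, beta1) <= predicted_ios(gamma2, beta2) else 2

# DBpedia statistics, used illustratively:
# option #1 fetches subject buckets (gamma = 5.77, beta = 1.03),
# option #2 fetches predicate buckets (gamma = 4.72, beta = 2.94).
best = choose_join_path(5.77, 1.03, 4.72, 2.94)
```

For these inputs the model predicts option #1, since 5.77 × 1.03 is far smaller than 4.72 × 2.94.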
500 queries were run for each of Q1, Q2, Q5 and Q6, and 100 queries were run for Q3 and Q4, as there were not enough unique joins of those types to run 500. The actual difference between the number of I/Os required when going through ?x and when going through ?y was recorded, along with the number of "victories" for each variable.

### 6.1 DBpedia Results

<table>
<thead>
<tr><th>Query</th><th>Predicted ?x I/Os</th><th>Predicted ?y I/Os</th><th>Predicted Diff</th><th>Actual Diff</th><th>?x Wins</th><th>?y Wins</th><th>Ties</th></tr>
</thead>
<tbody>
<tr><td>Q1</td><td>29.72</td><td>69.38</td><td>-39.66</td><td>-1448</td><td>497</td><td>3</td><td>0</td></tr>
<tr><td>Q2</td><td>30.00</td><td>69.38</td><td>-39.38</td><td>-1218</td><td>461</td><td>30</td><td>9</td></tr>
<tr><td>Q3</td><td>372.81</td><td>231.04</td><td>141.77</td><td>-169</td><td>48</td><td>16</td><td>36</td></tr>
<tr><td>Q4</td><td>376.43</td><td>228.81</td><td>147.62</td><td>87</td><td>18</td><td>54</td><td>28</td></tr>
<tr><td>Q5</td><td>14.37</td><td>69.38</td><td>-55.01</td><td>-567</td><td>447</td><td>18</td><td>35</td></tr>
<tr><td>Q6</td><td>14.51</td><td>69.38</td><td>-54.87</td><td>-500</td><td>479</td><td>12</td><td>9</td></tr>
</tbody>
</table>

Table 6.3: I/O results for DBpedia

The results for DBpedia generally fit the expected outcomes, except for Q3, where ?y was predicted to do better but ?x did better instead. It is interesting to note that the actual difference values were usually larger than those calculated from our averages-based model. Also, the margins of victory in Q3 and Q4 were very close, which suggests that without further tweaking the model may not predict I/O performance well for these queries on DBpedia.
### 6.2 Uniprot Results

<table>
<thead>
<tr><th>Query</th><th>Predicted ?x I/Os</th><th>Predicted ?y I/Os</th><th>Predicted Diff</th><th>Actual Diff</th><th>?x Wins</th><th>?y Wins</th><th>Ties</th></tr>
</thead>
<tbody>
<tr><td>Q1</td><td>8.45</td><td>1168.65</td><td>-1160</td><td>-1699</td><td>500</td><td>0</td><td>0</td></tr>
<tr><td>Q2</td><td>8.62</td><td>1168.65</td><td>-1160</td><td>-1620</td><td>497</td><td>3</td><td>0</td></tr>
<tr><td>Q3</td><td>55020</td><td>19681</td><td>35339</td><td>62989</td><td>22</td><td>77</td><td>1</td></tr>
<tr><td>Q4</td><td>56120</td><td>19295</td><td>36825</td><td>43731</td><td>21</td><td>75</td><td>4</td></tr>
<tr><td>Q5</td><td>16.95</td><td>1168.65</td><td>-1152</td><td>-489</td><td>496</td><td>3</td><td>1</td></tr>
<tr><td>Q6</td><td>17.29</td><td>1168.65</td><td>-1151</td><td>-491</td><td>497</td><td>2</td><td>1</td></tr>
</tbody>
</table>

Table 6.4: I/O results for Uniprot

The Uniprot results correctly predicted the better join path for all queries, usually by a large margin.

### 6.3 SP²Bench Results

<table>
<thead>
<tr><th>Query</th><th>Predicted ?x I/Os</th><th>Predicted ?y I/Os</th><th>Predicted Diff</th><th>Actual Diff</th><th>?x Wins</th><th>?y Wins</th><th>Ties</th></tr>
</thead>
<tbody>
<tr><td>Q1</td><td>25.70</td><td>1669</td><td>-1643</td><td>-2174</td><td>500</td><td>0</td><td>0</td></tr>
<tr><td>Q2</td><td>26.21</td><td>1669</td><td>-1643</td><td>-1854</td><td>500</td><td>0</td><td>0</td></tr>
<tr><td>Q3</td><td>12925</td><td>6941</td><td>5984</td><td>7960</td><td>4</td><td>80</td><td>16</td></tr>
<tr><td>Q4</td><td>13183</td><td>6805</td><td>6378</td><td>8890</td><td>1</td><td>93</td><td>6</td></tr>
<tr><td>Q5</td><td>9.95</td><td>1669</td><td>-1659</td><td>-380</td><td>499</td><td>1</td><td>0</td></tr>
<tr><td>Q6</td><td>10.15</td><td>1669</td><td>-1659</td><td>-391</td><td>499</td><td>0</td><td>1</td></tr>
</tbody>
</table>

Table 6.5: I/O results for SP²Bench

Finally, the SP²Bench results show numbers similar to Uniprot's, and the statistics correctly predicted the better path for each query by a large margin.

### 6.4 Variant Query Forms

The models introduced can also be used for variant query forms that involve three SAPs.
For example, consider the BGP:

\[ (a, ?y, \_) \land (\_, ?y, ?x) \land (b, \_, ?x) \]

While this form has three SAPs and two atoms, the join plans available and the number of I/Os necessary are very similar to those for our previous example BGP,

\[ (a, ?y, ?x) \land (?x, ?y, ?z) \]

The only difference is that after joining one SAP to the middle SAP, another join has to be done with the remaining SAP. Our models should be able to use the same method to predict queries of this form. We use the following two queries for experiments:

Q7. $(?x, a, \_) \land (?x, \_, ?y) \land (\_, b, ?y)$ — similar to Q3
Q8. $(?x, a, \_) \land (?y, \_, ?x) \land (\_, b, ?y)$ — similar to Q4

<table>
<thead>
<tr><th>Dataset / Query</th><th>Predicted Diff</th><th>Actual Diff</th><th>?x Wins</th><th>?y Wins</th><th>Ties</th></tr>
</thead>
<tbody>
<tr><td>DBpedia Q7</td><td>142</td><td>-316</td><td>60</td><td>33</td><td>7</td></tr>
<tr><td>DBpedia Q8</td><td>148</td><td>117</td><td>35</td><td>58</td><td>7</td></tr>
<tr><td>Uniprot Q7</td><td>19681</td><td>37046</td><td>37</td><td>63</td><td>0</td></tr>
<tr><td>Uniprot Q8</td><td>19295</td><td>61670</td><td>23</td><td>76</td><td>1</td></tr>
<tr><td>SP²Bench Q7</td><td>6941</td><td>9435</td><td>31</td><td>69</td><td>0</td></tr>
<tr><td>SP²Bench Q8</td><td>6805</td><td>8801</td><td>41</td><td>58</td><td>1</td></tr>
</tbody>
</table>

Table 6.6: I/O results for the variant query form

The results in Table 6.6 indicate that while the margins of victory are not as great, the models still correctly predict the best path the majority of the time. Q7 for DBpedia gives an unexpected result similar to Q3. Overall, the models are useful not only for queries with two SAPs, but also for variant query forms that have at least one all-variable SAP.

6.5 Discussion

The models predict the correct join path for most of the queries.
In the case of the unexpected DBpedia result, it is likely that the statistics for the object and subject positions in DBpedia were very similar, which would make the calculated expected I/Os similar as well. Table 6.1 shows that for DBpedia, the average number of unique objects per subject bucket was 5.77, and the average number of unique subjects per object bucket was 2.79. Additionally, the average bucket sizes for the two positions were similar. The corresponding statistics for SP²Bench were 5.14 and 1.99 respectively, and the same query was correctly predicted there. The limitation of our model thus lies with queries whose relevant statistics have close values. Note that although the predicted differences and actual differences often varied greatly, the models still predicted the correct outcome.

Chapter 7 Conclusion

We implemented the nested-loop, hash, and sort-merge joins. Synthetic benchmarks found that the nested-loop join, predictably, performed much worse than the other two in terms of CPU performance. The hash join was about an order of magnitude faster than the sort-merge join when the latter had to sort its inputs. Benchmarks performed on real datasets showed much smaller differences in CPU performance, as far fewer triples were considered in the joins. In terms of I/O performance, the nested-loop join was about 10% worse than the hash join and sort-merge join, which performed equally.

We analyzed the different kinds of SAP patterns found in SPARQL queries to better understand how to develop a query processing algorithm using Fletcher and Beck's TripleT index. To process an SAP with at least one atom, an index lookup should be performed to obtain the bucket of the atom in the most selective position. To handle all-variable SAPs, we introduced a model to estimate I/O cost a priori, and conducted experiments to verify the model, which was found to be correct in nearly all cases.
The technique is attractive in that it requires maintaining only a few statistics that can be calculated very efficiently during the index creation process.

For future work, an implementation of TripleT that stores the index on disk instead of in memory is necessary to obtain a more accurate picture of performance. Doing so would allow accurate timing measurements to be taken, instead of having to look at CPU time and I/O accesses separately and guess at the relationship between the two metrics. For query optimization, the obvious next step is to experiment with larger datasets using a variety of queries and eventually develop a query optimizer that can process more complicated SPARQL queries. For example, consider the SP²Bench benchmark 5b query [8] shown in Figure 7.1. The BGP form of this query is:

$$(?v, a, b) \land (?v, c, ?x) \land (?y, a, d) \land (?y, c, ?x) \land (?x, e, \_)$$

It was reported that all current SPARQL engines performed poorly on this query [6]. A query with this many variables should benefit greatly from good join ordering.

/*
 * Return the names of all persons that occur as author
 * of at least one inproceeding and at least one article
 * (same as (Q5a)).
 */
SELECT DISTINCT ?person ?name
WHERE { ?article rdf:type bench:Article .
        ?article dc:creator ?person .
        ?inproc rdf:type bench:Inproceedings .
        ?inproc dc:creator ?person .
        ?person foaf:name ?name }

Figure 7.1: SP²Bench 5b query

Additional future work for TripleT will be to investigate compression schemes to reduce the size of the index. Development of a working implementation that can answer basic SPARQL queries would also be useful, so that existing benchmarks like SP²Bench [8] could be used to compare TripleT with other implementations.

Bibliography
A HIGH-PERFORMANCE BROWNIAN BRIDGE FOR GPUS: LESSONS FOR BANDWIDTH BOUND APPLICATIONS

JACQUES DU TOIT

Abstract. We present a very flexible Brownian bridge generator together with a GPU implementation which achieves close to peak performance on an NVIDIA C2050. The performance is compared with an OpenMP implementation run on several high performance x86-64 systems. The GPU shows a performance gain of at least 10x. Full comparative results are given in Section 8: in particular, we observe that the Brownian bridge algorithm does not scale well on multicore CPUs since it is memory bandwidth bound. The evolution of the GPU algorithm is discussed. Achieving peak performance required challenging the "conventional wisdom" regarding GPU programming, in particular the importance of occupancy, the speed of shared memory and the impact of branching.

Contents

1. Introduction and Software Requirements
2. Algorithm Design
3. First GPU Strategy
4. Second GPU Strategy
5. Third GPU Strategy
6. Fourth GPU Strategy
7. Fifth GPU Strategy
8. Summary of Results and Conclusions
References

1. Introduction and Software Requirements

The Brownian bridge algorithm (see e.g. [2]) is a popular method for constructing sample paths of a Brownian motion. The procedure may be summarised as follows. Fix two times \( t_0 < T \) and let \( X = (X_t)_{t_0 \leq t \leq T} \) denote a Brownian motion on the interval \([t_0,T]\). Let \((t_i)_{1 \leq i \leq N}\) be any set of time points satisfying \( t_0 < t_1 < \ldots < t_N < T \) for some \( N \geq 1 \). Our aim is to simulate values for \( \{X_{t_i}\}_{1 \leq i \leq N} \) and \( X_T \) using a set of standard Normal random numbers \( Z_0, Z_1, \ldots, Z_N \). We assume that the value \( X_{t_0} = x \) is always known (often \( x = 0 \)), and we always set \( X_T = x + \sqrt{T-t_0}\,Z_0 \). The Brownian bridge algorithm then uses interpolation to fill in the remaining values \( \{X_{t_i}\}_{1 \leq i \leq N} \).
Given any two points \( X_{t_i} \) and \( X_{t_k} \) which are known, a third point \( X_{t_j} \) for \( t_i < t_j < t_k \) can be computed as

\[ X_{t_j} = \frac{X_{t_i}(t_k - t_j) + X_{t_k}(t_j - t_i)}{t_k - t_i} + Z_j \sqrt{\frac{(t_k - t_j)(t_j - t_i)}{t_k - t_i}}. \]

The algorithm is therefore iterative. Given the known starting value \( X_{t_0} = x \) and the final value \( X_T = x + \sqrt{T-t_0}\,Z_0 \), a third point \( X_{t_i} \) for any \( 1 \leq i \leq N \) can be computed. Given the three points \( X_{t_0}, X_T, X_{t_i} \), a fourth point \( X_{t_j} \) for any \( j \neq i \) can be computed by interpolating between its nearest neighbours. The process continues until all the points have been generated. If the Brownian motion is multidimensional, the algorithm can still be used with minor changes. Each \( X_{t_i} \) and \( Z_i \) becomes a vector, and correlation is introduced by setting

\[ X_{t_j} = \frac{X_{t_i}(t_k - t_j) + X_{t_k}(t_j - t_i)}{t_k - t_i} + C Z_j \sqrt{\frac{(t_k - t_j)(t_j - t_i)}{t_k - t_i}} \]

where \( C \) is a matrix such that \( CC' \) gives the desired covariance structure of the Brownian motion. When the Brownian bridge is used to solve stochastic differential equations, it is more appropriate to produce scaled increments of the form \((X_{t_{i+1}} - X_{t_i})/(t_{i+1} - t_i)\). It turns out that this is somewhat easier than producing the Brownian sample path points \( X_{t_i} \). We will not discuss scaled increments further; however, timings for the increments generators are given in Section 8.

1.1. Bridge Construction Orders. The Brownian bridge algorithm is not fully specified until we state which points \( X_{t_j} \) are interpolated from which points \( X_{t_i} \) and \( Z_i \).
For example, with \( N = 12 \) and a set of time points \( \{t_i\}_{1 \leq i \leq 12} \) we could construct a bridge in the order
\[ T \quad t_6 \quad t_3 \quad t_9 \quad t_1 \quad t_4 \quad t_7 \quad t_{11} \quad t_2 \quad t_5 \quad t_8 \quad t_{10} \quad t_{12} \tag{3} \]
meaning that \( X_{t_6} \) is interpolated between \( X_{t_0} \) and \( X_T \); \( X_{t_3} \) is interpolated between \( X_{t_0} \) and \( X_{t_6} \); \( X_{t_9} \) is interpolated between \( X_{t_6} \) and \( X_T \); \( X_{t_1} \) is interpolated between \( X_{t_0} \) and \( X_{t_3} \); \( X_{t_4} \) is interpolated between \( X_{t_3} \) and \( X_{t_6} \); and so on. However we could equally construct the bridge in the order
\[ T \quad t_2 \quad t_4 \quad t_3 \quad t_9 \quad t_1 \quad t_7 \quad t_{12} \quad t_5 \quad t_{10} \quad t_6 \quad t_{11} \quad t_8 \tag{4} \]
where now \( X_{t_2} \) is interpolated between \( X_{t_0} \) and \( X_T \); \( X_{t_4} \) is interpolated between \( X_{t_2} \) and \( X_T \); \( X_{t_3} \) is interpolated between \( X_{t_2} \) and \( X_{t_4} \); \( X_{t_9} \) is interpolated between \( X_{t_4} \) and \( X_T \); \( X_{t_1} \) is interpolated between \( X_{t_0} \) and \( X_{t_2} \); and so on. Both construction orders are equally valid. Indeed, any permutation of the times \( \{t_i\}_{1 \leq i \leq N} \) will specify a valid bridge construction order. If \( \Theta = \{\theta_i\}_{1 \leq i \leq N} \) denotes a permutation of the set \( \{t_i\}_{1 \leq i \leq N} \), then for any \( \theta_i \in \Theta \) we will have
\[ X_{\theta_i} = \frac{X_\ell(r - \theta_i) + X_r(\theta_i - \ell)}{r - \ell} + Z_i \sqrt{\frac{(r - \theta_i)(\theta_i - \ell)}{r - \ell}} \tag{5} \]
where \( \ell = \max\{t_0, \theta_j \mid 1 \leq j < i, \theta_j < \theta_i\} \) is the greatest “known” point smaller than \( \theta_i \) and \( r = \min\{T, \theta_j \mid 1 \leq j < i, \theta_j > \theta_i\} \) is the smallest “known” point greater than \( \theta_i \).
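In code, \( \ell \) and \( r \) are simply the nearest already-generated points on either side of \( \theta_i \). A minimal Python sketch follows (our own naming; the production code precomputes these indexes during initialisation rather than scanning at run time). For simplicity we take \( t_i = i \), \( t_0 = 0 \) and \( T = 13 \):

```python
def neighbours(order, i, t0, T):
    """Return (l, r): the nearest already-known times on either
    side of order[i], as defined for equation (5)."""
    theta = order[i]
    # Everything generated before step i is "known", plus the endpoints.
    known = [t0, T] + list(order[:i])
    l = max(s for s in known if s < theta)
    r = min(s for s in known if s > theta)
    return l, r

# Bisection-style construction order from (3), with t_i = i:
order = [6, 3, 9, 1, 4, 7, 11, 2, 5, 8, 10, 12]
```

For example, `neighbours(order, 2, 0, 13)` returns `(6, 13)`: by the time \( X_{t_9} \) is generated, \( X_{t_6} \) and \( X_T \) are its nearest known neighbours.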
Here we mean that a time point \( s \) is “known” if the corresponding value \( X_s \) has already been computed (and is therefore known) by the time we come to computing \( X_{\theta_i} \). We are simply ensuring that when we interpolate \( X_{\theta_i} \), we interpolate between its nearest known neighbours. For example in (4) above, we interpolate \( X_{t_4} \) between \( X_{t_2} \) and \( X_T \) and not between \( X_{t_0} \) and \( X_T \).

1.2. Quasi-Random Numbers. When the \( Z_i \)s are drawn from a pseudorandom generator, there is no theoretical reason to prefer one bridge construction order over another since each \( Z_i \) is independent from, and identical in distribution to, every other \( Z_i \). However the Brownian bridge algorithm is frequently used with quasi-random numbers generated from low discrepancy sequences (such as Sobol sequences), and in this case the situation is very different. We refer to [2] for a more detailed discussion about why one would use quasi-random points with a Brownian bridge algorithm, but essentially the idea is that one “covers” the space of Brownian sample paths more evenly than one would with pseudo-random points. The advantages are exactly analogous to using quasi-random points in a Monte Carlo integration of an $N+1$ dimensional function. A single $N+1$ dimensional quasi-random point $(Z_0, Z_1, \ldots, Z_N)$ is used to construct an entire sample path. The problem is that, for most quasi-random generators, the lower dimensions $(Z_0, Z_1, \ldots)$ typically display much better uniformity properties than the higher dimensions $(\ldots, Z_{N-1}, Z_N)$. The lower dimensions are therefore more “valuable” and should be used to construct the most important parts of the Brownian motion.
For example, if we consider a model which is particularly sensitive to the behaviour of the Brownian motion at time $\eta$, then we would ensure that

- time $\eta$ was one of the interpolation points,
- $X_\eta$ was constructed using a $Z_i$ from the lower dimensions,
- $X_\eta$ was interpolated between points which were themselves constructed using $Z_i$s from the lower dimensions.

This idea maps quite naturally to the bridge construction orders as depicted in (3) and (4) above. If we specify a bridge construction order through a permutation $\Theta \equiv \{\theta_i\}_{1 \leq i \leq N}$ of the times $\{t_i\}_{1 \leq i \leq N}$, and we ensure that the most important time points are given by $\theta_1, \theta_2, \ldots$, then we can use $Z_0$ to construct $X_T$, use $Z_1$ to construct $X_{\theta_1}$, use $Z_2$ to construct $X_{\theta_2}$, and so on. The construction order $\Theta$ maps directly onto the dimensions of the quasi-random point so that it is clear which dimension will be used to construct each point $X_{\theta_i}$. For ease of notation we will set $\theta_0 \equiv T$ so that $Z_i$ is used to construct $X_{\theta_i}$ for each $0 \leq i \leq N$.

1.3. Memory Bandwidth Bound Algorithm. The multipliers $(t_k - t_j)/(t_k - t_i)$, $(t_j - t_i)/(t_k - t_i)$ and $\sqrt{(t_k - t_j)(t_j - t_i)/(t_k - t_i)}$ in (1) above can be precomputed: there are only $N$ sets of them, and they do not change once the construction order is fixed. These values can be taken as known when considering (1); they simply have to be fetched from memory. The Brownian bridge is therefore a memory bandwidth bound application: there is very little computation relative to the amount of data transferred. Our aim therefore was to achieve peak memory bandwidth (or as near to it as possible) as measured when considering only the movement of $Z_i$s from main memory and the movement of $X_{\theta_i}$s to main memory.
Additional data movement (if any) would not be taken into account, so that the metric would reflect the ideal world (from a user’s point of view) where any extra data traffic could somehow be made to disappear. Users were to have complete freedom to specify any bridge construction order. Helper routines would be provided so that a user would not have to manufacture a full construction order themselves. The algorithms would be implemented on traditional x86-64 architectures and on NVIDIA GPUs using CUDA. Our discussion will focus on the GPU implementation: comparative results with the x86-64 multicore implementations are given in Section 8.

2. Algorithm Design

The fact that a user can specify any bridge construction order means that it is essential to abstract this away, thereby reducing all construction orders to a common format. To achieve this, a two-step design was adopted. In the first step (called initialisation), the user passes in the bridge construction order. This order is then processed and reduced to an execution strategy which is copied to the GPU. As long as the bridge construction order and the time points remain unchanged, the execution strategy remains valid. In the second step (called generation), the Brownian sample paths are generated from a set of input Normal random numbers (either pseudorandom or quasi-random). The generation step can be executed several times to generate successive sets of Brownian sample paths.

2.1. Local Stack. An efficient implementation of a Brownian bridge algorithm typically requires a local workspace array, which we call the local stack. Since previously computed Brownian points $X_{\theta_i}$ are used to interpolate new points, it is clear that computed points should be kept as close to the processing units as possible, preferably in hardware cache. Since the hardware cache is too small to hold all the Brownian points, at each time step the algorithm must decide which points to keep, and these are stored in the local stack.
The algorithm must also determine when a point in the local stack is no longer required, so that it can be replaced by a new point. The idea is that, if the local stack does not grow too large, there is a good chance it will fit in L1 or L2 cache. A key requirement of the bridge algorithm therefore is that it ensures that the local stack stays small.

2.2. Initialisation and the Execution Strategy. It turns out that it is possible to permute a given bridge construction order without changing the actual Brownian points that are computed. To illustrate, consider (3) above. This construction order is equivalent to the following construction order
$$T \quad t_6 \quad t_9 \quad t_3 \quad t_{11} \quad t_7 \quad t_4 \quad t_1 \quad t_{12} \quad t_{10} \quad t_8 \quad t_5 \quad t_2 \tag{6}$$
since $X_{t_6}$ is interpolated between $X_{t_0}$ and $X_T$; $X_{t_9}$ is interpolated between $X_{t_6}$ and $X_T$; $X_{t_3}$ is interpolated between $X_{t_0}$ and $X_{t_6}$; $X_{t_{11}}$ is interpolated between $X_{t_9}$ and $X_T$; and so on. The output from this construction order will be identical to the output from (3) as long as the same $Z_i$s are used to create each bridge point. Therefore in (6) we would use $Z_0$ to generate $X_T$; $Z_1$ to generate $X_{t_6}$; $Z_3$ to generate $X_{t_9}$; $Z_2$ to generate $X_{t_3}$; $Z_7$ to generate $X_{t_{11}}$; and so on.

It is easy for an arbitrary bridge construction order to use a lot of local stack. The task of the initialisation step is therefore to take the user’s construction order and to find an equivalent construction order (along with a permutation of the $Z_i$s) which uses a minimal amount of stack. This procedure is rather technical and will not be discussed further, but the output is an execution strategy which is passed to the generate step. The execution strategy consists of a new bridge construction order and a permutation of the $Z_i$s.

2.3. Conventional Wisdom: Occupancy, Shared Memory and Branching.
The conventional wisdom regarding GPU programming is that occupancy is important for memory bandwidth bound applications, that shared memory is fast, and that branching should be avoided. By branching here we do not mean warp divergence: we mean traditional serial branching. On a GPU this is equivalent to an if-then-else statement where all the threads in a warp take the same branch. Avoiding branches, especially in inner loops, is a standard optimisation strategy for CPU code. To increase occupancy, a kernel should use as few registers as possible and the kernel should be launched with as many threads per streaming multiprocessor (SM) as possible. The idea is that the memory requests from all these threads will saturate the memory bus, and while data is being fetched for some threads (and they can therefore do nothing while waiting for the data to arrive), other threads whose data has arrived can continue executing.

In order to interpolate any Brownian point $X_{\theta_i}$, the following steps have to be carried out:

(a) Determine the left and right neighbours of $X_{\theta_i}$ and find their locations in the local stack. These are the points $X_\ell$ and $X_r$ in (5) above between which $X_{\theta_i}$ is interpolated.
(b) Read the left and right neighbours off the local stack.
(c) Determine which random point $Z_i$ to use.
(d) Read the $Z_i$ point from global memory.
(e) Compute $X_{\theta_i}$ using (5).
(f) Determine where to store $X_{\theta_i}$ in main memory. Clearly points should be stored in the correct order, namely $X_{t_1}, X_{t_2}, \ldots, X_T$.
(g) Store $X_{\theta_i}$ to main memory.
(h) Determine where to store $X_{\theta_i}$ in the local stack.
(i) Store $X_{\theta_i}$ in the local stack.

Steps (a), (c), (f) and (h) are all done during the initialisation step and together constitute the execution strategy.
Physically the execution strategy consists of an array of integers which is copied to the GPU and read off by each CUDA thread as it generates a sample path. At each time step a thread would need to read 5 integers in order to compute a new Brownian point: the indexes of the left and right neighbours in the local stack; the index of the point $Z_i$; the storage index of $X_{\theta_i}$ in global memory; and the storage index of $X_{\theta_i}$ in the local stack. Given this information, the generate step then simply consists of

(a) Read the left and right neighbours off the local stack.
(b) Read the $Z_i$ point from global memory.
(c) Compute $X_{\theta_i}$ using (5).
(d) Store $X_{\theta_i}$ to main memory.
(e) Store $X_{\theta_i}$ in the local stack.

This is a branchless process and therefore should be very efficient. In addition, the local stack can be stored in shared memory, which is very fast. There is no warp divergence (each thread generates a separate Brownian sample path – threads in a warp are doing the same thing at the same time), all accesses to global memory are aligned and fully coalesced, and there are no bank conflicts when accessing shared memory. All in all, the algorithm seems ideally suited to the GPU and should perform very well.

2.4. Test System and Test Problem. The test system consists of an Intel Core i7 860 running at 2.8GHz with 8GB RAM and a Tesla C2050 with Error Checking and Correction (ECC) on. The system runs 64 bit Linux with CUDA Toolkit v4.0. The basic test problem consists of generating 1,439,744 one dimensional Brownian sample paths each with 64 time steps, and the construction order is a standard bisection order such as that given by (3). All computations are carried out in single precision: it is harder to saturate the memory bus with 4 byte transfers than with 8 byte transfers. In total then the test problem consists of moving 351MB of data (the $Z_i$s) onto the compute cores, and then moving 351MB of data (the $X_{t_i}$s) back out to global memory.
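The five-step generate loop above can be modelled on the host. The following is a Python sketch of the per-thread logic with a tiny hand-built execution strategy for illustration; the array layout and names are our own, not NAG's:

```python
import math

def generate_path(x0, t0, T, Z, strategy, multipliers, n_points):
    """Branchless generate step. Each row of `strategy` holds
    (left stack index, right stack index, Z read index,
     output index, stack store index); each row of `multipliers`
    holds the precomputed (a, b, c) of equation (5)."""
    stack = [0.0] * (n_points + 2)
    out = [0.0] * (n_points + 1)
    stack[0] = x0                          # X_{t_0}, always known
    x_T = x0 + math.sqrt(T - t0) * Z[0]    # X_T from Z_0
    stack[1] = x_T
    out[n_points] = x_T
    for (li, ri, zi, oi, si), (a, b, c) in zip(strategy, multipliers):
        x = a * stack[li] + b * stack[ri] + c * Z[zi]  # equation (5)
        out[oi] = x                        # store to "global memory"
        stack[si] = x                      # store to the local stack
    return out

# Toy bisection order for N = 3 interior points at t = 1, 2, 3 with
# t0 = 0 and T = 4: build X_{t_2}, then X_{t_1}, then X_{t_3}.
strategy = [(0, 1, 1, 1, 2),   # X_{t_2} from X_{t_0} and X_T
            (0, 2, 2, 0, 3),   # X_{t_1} from X_{t_0} and X_{t_2}
            (2, 1, 3, 2, 4)]   # X_{t_3} from X_{t_2} and X_T
multipliers = [(0.5, 0.5, 1.0),
               (0.5, 0.5, math.sqrt(0.5)),
               (0.5, 0.5, math.sqrt(0.5))]
```

With all bridge noise set to zero except \( Z_0 \), the output is the straight line from \( x_0 \) to \( X_T \), which makes the strategy easy to verify by hand.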
All performance measurements were obtained through NVIDIA’s Compute Visual Profiler with all performance counters enabled. If an algorithm introduced local memory bus traffic (values in L1 cache spilling to L2 cache or to global memory), this was noted. Only the generate step was timed; the initialisation step was ignored. The peak memory bandwidth of the C2050 is given as 144GB/s. However this is the figure with ECC turned off. When ECC is turned on, the peak bandwidth of the card is reduced to slightly less than 120GB/s (see [1]). The target is therefore to achieve as close to a 120GB/s overall transfer rate as possible. All performance figures quoted below are for the optimal launch configuration that was found, as measured by kernel execution time. Occupancy figures are for that launch configuration.

3. First GPU Strategy

Recall that the execution strategy consists of an array of integers which contain indexes into the local stack, the array of $Z_i$s and the bridge storage (output) array. Since these integers are fixed for the duration of the generate step, and each thread in a warp will access the same element at the same time (broadcast access), it seems obvious to store them in constant memory. The first GPU implementation was therefore as follows. To conserve space, the execution strategy used 8 bit unsigned integers to store the read and write indexes into the local stack and 16 bit unsigned integers to store the $Z_i$ read index and the $X_{\theta_i}$ write index. The total size of the execution strategy for each bridge point was therefore $2 \times 16$ bit $+ 3 \times 8$ bit, for a total cost of 7 bytes. These values were read from constant memory through 16 bit and 8 bit broadcasts. The stack was held in local memory and L1 caching was turned off, so that the stack physically resided in the L1 cache. Since the same physical hardware is used for both L1 cache and shared memory, this should be equivalent to placing the stack in shared memory.

3.1.
Parallelisation and Multidimensional Brownian Motions. When the Brownian motion is one dimensional, it is clear that each CUDA thread can create a Brownian sample path independently of every other CUDA thread. In this case one would have each thread create several Brownian paths, so that each thread block does a reasonable amount of work. However if the Brownian motion is multidimensional we must compute the matrix-vector product $CZ_i$. The way this was implemented was to put $C$ in constant memory and have threads read the values of each $Z_i$ vector into shared memory. Threads would then synchronise and each thread would compute its row of the matrix-vector product $CZ_i$. Accesses to $C$ followed a broadcast access pattern. The bridge point $X_{\theta_i}$ could then be computed as before, independently of all other CUDA threads. This approach meant that two `__syncthreads()` calls had to be issued at each time step.

3.2. Performance. The kernel used 20 registers, had an occupancy of 100% and a performance of 29GB/s.

4. Second GPU Strategy

Frequently heard advice for bandwidth bound applications is to use vector data types so that each thread transfers and processes more data. We adjusted the algorithm from Section 3 to allow each thread to read the $Z_i$s and write the $X_{\theta_i}$s as either `float2`, `float3` or `float4`. Now each thread would process 2, 3 or 4 Brownian sample paths at once, so that the amount of local stack required would increase by a factor of 2, 3 or 4. Apart from that, the algorithm was unchanged: the execution strategy was read from constant memory, there were two synchronisations per time step, and the matrix $C$ was held in constant memory.

4.1. Performance. The kernels used between 20 and 34 registers, had occupancy ranging between 100% and 54%, and generally gave poor performance. Although global memory traffic increased, this was due to the much larger local stacks which spilled to L2 cache.
The stacks were held in local memory, and thus resided in L1 cache. To increase occupancy, each thread block was launched with many threads. However, as there were no restrictions on registers or shared memory which would force the runtime to place only one or two blocks per SM, the runtime placed several blocks on each SM, exhausting the L1 cache and causing the stacks to spill to L2 and global memory. Running the kernel with fewer blocks and fewer threads alleviated the spilling, but led to worse performance since there were now fewer transactions to global memory for the $Z_i$s and $X_{\theta_i}$s. In all, each kernel ran slower than the kernel from Section 3, meaning the effective performance was less than 29GB/s.

5. Third GPU Strategy

Since the normal running of the algorithm could not generate enough memory instructions to saturate the memory bus, we introduced *explicit data prefetching*. This idea exploits the fact that the GPU compute cores do not stall on a data load instruction: they only stall once the register used for the load (i.e. the register into which the data from global memory is loaded) becomes the argument of an operation. Therefore if one thread issues several load operations on several different registers, and the programmer takes care not to use those registers until later in the program, then the loads will happen asynchronously while that thread carries on executing. We changed the algorithm from Section 3 so that each thread prefetched $P$ Normal random numbers before starting a path calculation. Then when each point $X_{\theta_i}$ in the path is calculated, the corresponding Normal number is used and a new Normal random number corresponding to $X_{\theta_{i+P}}$ is loaded into the same register. This way, $P$ Normal fetches are always in flight. With this strategy it is also necessary to unroll the main compute loop $P$ times to ensure that registers are handled correctly.

5.1. Performance.
After some experiments, we found that the optimal value was $P = 8$. This gave a kernel using 34 registers with an occupancy of 58% and a performance of 58GB/s.

6. Fourth GPU Strategy

At this point it was clear that a comprehensive re-thinking of the algorithm was needed. The fundamental problem seemed to be the level of indirection that is placed between the threads and all the memory operations. Before any memory is accessed, each thread first has to fetch the index at which the access is to occur, and must then use the index to access the data. Each memory operation therefore requires two trips to memory: one to fetch the index, and another to access the data using the index. This was slowing the whole process down. Constant memory is simply not fast enough to deliver the indexes at a rate which saturates the global memory bus. Synthetic experiments where each thread “computed” the indexes on-the-fly during execution came close to saturating the bus. The bottleneck therefore appeared to be fetching the indexes. Indeed, moving the indexes from constant memory into shared memory showed no improvement: shared memory, just like constant memory, is too slow to provide the indexes at a rate which saturates the global memory bus. Since the indexes cannot in general be computed on-the-fly (they depend on the user’s construction order), a way around the bottleneck had to be found. Prefetching data seemed to provide some hope. However if we wish to prefetch Normal random numbers, then we should also be prefetching their indexes (since the index is needed to effect the load). And if we are prefetching those indexes, perhaps it makes sense to prefetch all the indexes. Therefore we changed the algorithm as follows:

(a) Threads no longer cooperated to generate a multidimensional Brownian sample path. Each thread would compute all the dimensions of each sample path. The matrix $C$ was moved to registers so that the matrix-vector product $CZ_i$ could be performed on data held in registers.
All the synchronisation barriers were removed.
(b) Since $Z_i$ values were no longer read into shared memory, the local stack was moved from local memory into shared memory.
(c) The execution strategy was moved from constant memory to shared memory. However, since shared memory is much too small to hold it, the execution strategy would have to reside in global memory and sections of it would be copied into shared memory as needed.
(d) The indexes defining the execution strategy were prefetched. At each step, 5 indexes are required, and two full steps’ worth of indexes were prefetched, meaning that 10 indexes were prefetched before the first Brownian point \( X_T \) was computed.
(e) In addition to prefetching the indexes, \( P \) Normal random numbers were also prefetched from global memory.
(f) The rest of the algorithm remained the same: left and right neighbours were read off the stack, a new Brownian point was computed, and it in turn was stored to the stack and to global memory.

6.1. **Performance.** The changes above led to an explosion in the complexity of the code. Although the ideas are relatively simple, when written down the code takes some time to understand. Not only must the prefetch registers be treated properly, but exceptional care must be taken with the corner cases and cleanup code which result from copying sections of the execution strategy into shared memory. This is compounded by the prefetching, since careful bookkeeping must be done to know which sections to copy. In summary, turning the idea into a robust implementation is not simple. The resulting kernel used 42 registers per thread, had an occupancy of 45% and ran at 85GB/s.
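The staging of execution strategy sections in item (c) is where much of the cleanup logic arises: the final section is generally only partially full. The bookkeeping can be modelled as follows (a Python sketch of the staging idea only; on the GPU the buffer lives in shared memory and the copies are overlapped with compute):

```python
def consume_in_chunks(strategy, buf_size):
    """Stream a long execution strategy through a fixed-size buffer,
    refilling the buffer as it is exhausted. The final iteration
    handles a partial section -- the cleanup case discussed above."""
    consumed = []
    pos = 0
    while pos < len(strategy):
        buf = strategy[pos:pos + buf_size]  # copy one section into the buffer
        for entry in buf:                   # the generate loop reads the buffer
            consumed.append(entry)
        pos += len(buf)
    return consumed
```

Whatever the buffer size, the entries must come out in exactly the order the initialisation step produced them, which is what makes the index bookkeeping delicate when prefetching is layered on top.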
<table>
<thead>
<tr>
<th>GPU Strategy</th>
<th>Registers</th>
<th>Occupancy</th>
<th>Performance</th>
<th>% of Peak</th>
</tr>
</thead>
<tbody>
<tr>
<td>First</td>
<td>20</td>
<td>100%</td>
<td>29GB/s</td>
<td>24.1%</td>
</tr>
<tr>
<td>Second</td>
<td>20 to 34</td>
<td>100% to 54%</td>
<td>&lt;29GB/s</td>
<td>&lt;24%</td>
</tr>
<tr>
<td>Third</td>
<td>34</td>
<td>58%</td>
<td>58GB/s</td>
<td>48.3%</td>
</tr>
<tr>
<td>Fourth</td>
<td>42</td>
<td>45%</td>
<td>85GB/s</td>
<td>70.8%</td>
</tr>
<tr>
<td>Fifth without prefetching</td>
<td>24</td>
<td>43%</td>
<td>79GB/s</td>
<td>65.8%</td>
</tr>
<tr>
<td>Fifth with prefetching</td>
<td>33</td>
<td>58%</td>
<td>102GB/s</td>
<td>85%</td>
</tr>
</tbody>
</table>

**Table 1.** Comparison of the different GPU strategies.

7. **Fifth GPU Strategy**

The performance of the previous GPU kernel is only 70% of theoretical peak, and there is not much more scope for prefetching. The most likely bottleneck in the algorithm is the traffic into and out of the local stack, and the associated fetching of indexes. At each time step we read two values from the stack and write one value to it, which means fetching three indexes. The indexes also reside in shared memory, which means at each time step we load three values from shared memory, then use those values to load two more values, and finally store a value to shared memory. This traffic is too much for the shared memory bus to handle: it becomes the bottleneck in the program. Returning to the initialisation step, we considered whether it was possible to reduce the amount of local stack traffic. The answer turns out to be yes, but only by introducing some branches in the innermost loop of the generate function. Consider the situation in (5) where \( X_\ell \) and \( X_r \) are the left and right neighbours respectively of the Brownian point \( X_{\theta_i} \).
By carefully tuning the execution strategy, it is possible to ensure that the set \( \{X_\ell, X_r, X_{\theta_i}\} \) contains both the left and right neighbours of the Brownian point \( X_{\theta_{i+1}} \). This cannot be done for every point \( X_{\theta_{i+1}} \), but it can be done for many of them, and so it is necessary to introduce some flags to identify when it holds. Correspondingly, the main compute loop of the generate step must contain some branches. The branch when computing the point \( X_{\theta_{i+1}} \) looks roughly as follows: are the left and right neighbours of \( X_{\theta_{i+1}} \) in the set \( \{X_\ell, X_r, X_{\theta_i}\} \)?

✓ Yes: identify which of the points $X_{\ell}$, $X_{r}$ and $X_{\theta_i}$ are the neighbours and use them to compute $X_{\theta_{i+1}}$.
× No: read the left and right neighbours from the local stack and use them to compute $X_{\theta_{i+1}}$.

Usually programmers would avoid branches such as this, especially in inner loops, but in this case it proves highly effective.

7.1. Performance. Without adding any prefetching, the kernel outlined above uses 24 registers, has an occupancy of 43% and runs at 79GB/s. Once the prefetching in Section 6 is added in, the kernel uses 33 registers, has an occupancy of 58% and runs at 102GB/s. The same code in double precision uses 38 registers, has an occupancy of 33% and runs at 115.66GB/s.

8. Summary of Results and Conclusions

In summary, we state the performance results for both the Brownian sample path generator and the scaled Brownian increments generator as measured on the system detailed in Section 2.4:

• Brownian sample paths generator
  – Single Precision: runtime is 10.01ms at 102GB/s achieved global memory throughput. This is 1.9x faster than the time taken to generate the Normal random numbers.\(^1\)
  – Double Precision: runtime is 17.12ms at 115.6GB/s achieved global memory throughput.
This is 2.2x faster than the time taken to generate the Normal random numbers.\(^2\)

• Scaled Brownian increments generator
  – Single Precision: runtime is 9.09ms at 109GB/s achieved global memory throughput. This is 2.09x faster than the time taken to generate the Normal random numbers.\(^1\)
  – Double Precision: runtime is 17.01ms at 116.3GB/s achieved global memory throughput. This is 2.2x faster than the time taken to generate the Normal random numbers.\(^2\)

The algorithm from Section 7 (without prefetching) was coded up in C, parallelised with OpenMP and run on a number of different x86-64 systems. The results are given in Table 2. The three CPU systems represent a range in performance, from the popular Intel Core i7 desktop processor to the top end Intel Xeon X5680 aimed at the server market. The AMD machine is a dual socket system featuring two Magny Cours chips, and is similar to the Phase 2 nodes in the UK HECToR supercomputer. The Xeon machine is also a dual socket system with two Westmere chips.

Observe the scaling of the CPU code. The Core i7 shows limited scaling past 2 threads. In addition, with more than 2 threads the performance becomes highly variable (the values shown here are averages). The Xeon shows virtually no scaling past 8 threads. The AMD is the only system that shows scaling up to full hardware capacity. That said, the performance of the AMD system is not particularly exciting, being around 70% slower than the Xeon.

\(^1\)Using the NAG GPU MRG32k3a generator, single precision
\(^2\)Using the NAG GPU MRG32k3a generator, double precision

Table 2. Benchmark figures for Tesla C2050 vs. several high performance CPUs.
<table>
<thead>
<tr>
<th colspan="3">Intel Core i7 860, 4 cores @ 2.8GHz (with hyperthreading)</th>
</tr>
<tr>
<th>Generator</th>
<th>Precision</th>
<th>1 Thread</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bridge</td>
<td>float</td>
<td>1212.5ms</td>
</tr>
<tr>
<td>Bridge</td>
<td>double</td>
<td>1771.1ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>float</td>
<td>1170.5ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>double</td>
<td>1452.8ms</td>
</tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th colspan="3">AMD Opteron 6174, 24 cores @ 2.2GHz (dual socket Magny Cours, 2 × 12 cores)</th>
</tr>
<tr>
<th>Generator</th>
<th>Precision</th>
<th>1 Thread</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bridge</td>
<td>float</td>
<td>2402.6ms</td>
</tr>
<tr>
<td>Bridge</td>
<td>double</td>
<td>2948.8ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>float</td>
<td>2173.6ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>double</td>
<td>2694.1ms</td>
</tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th colspan="3">Intel Xeon X5680, 12 cores @ 3.33GHz (dual socket Westmere, 2 × 6 cores with hyperthreading)</th>
</tr>
<tr>
<th>Generator</th>
<th>Precision</th>
<th>1 Thread</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bridge</td>
<td>float</td>
<td>1392.1ms</td>
</tr>
<tr>
<td>Bridge</td>
<td>double</td>
<td>1463.8ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>float</td>
<td>1207.4ms</td>
</tr>
<tr>
<td>Bridge Incs</td>
<td>double</td>
<td>1308.1ms</td>
</tr>
</tbody>
</table>

The behaviour in Table 2 is typical for memory bandwidth bound applications. Throwing additional compute cores at the problem does not improve performance: the speed of the actual memory hardware must be increased. Here GPUs have a distinct advantage due to the fast GDDR5 graphics memory.

8.1. Conclusions.
In the process of producing the Brownian bridge generators, we have learned a number of lessons regarding the “conventional wisdom” of programming GPUs:

- Higher occupancy does not necessarily mean higher performance for memory bandwidth bound applications.
- Explicit prefetching of data into registers can boost performance significantly.
- Shared memory is not as fast as one might think. In particular, inserting a layer of indirection whereby an index is fetched from shared memory, and then that index is used to fetch another value from shared memory, can slow things down a lot.
- Aggressive use of registers is a good way to boost performance, even for bandwidth bound applications, since registers are the fastest memory on the GPU.
- Judicious use of branching can increase performance, even when the branch is in the innermost compute loop.
- GPUs can be very effective at accelerating memory bandwidth bound applications, which often do not scale well on traditional multicore platforms.

8.2. Acknowledgments. The Numerical Algorithms Group wishes to thank Professor Mike Giles, whose work on a GPU Brownian bridge routine was invaluable in making the present implementation. NAG would also like to acknowledge the advice and feedback from two senior quantitative analysts who gave valuable insights into how a Brownian bridge algorithm is used in production.

8.3. Access to Software. Users who wish to obtain the Brownian bridge routines should contact NAG either through the website www.nag.co.uk, or via email at infodesk@nag.co.uk. Both GPU and CPU (single threaded) implementations are available in the NAG Numerical Routines for GPUs\textsuperscript{3}. The routine documentation is available at [3]. Example programs (including documentation) showing how to use a GPU Sobol generator together with a GPU Brownian bridge in order to create a GPU Monte Carlo pricing application are available at [4].
As well as featuring in the NAG Numerical Routines for GPUs, both serial and multi-threaded implementations of this very flexible Brownian bridge algorithm will also feature in future releases of NAG's CPU Libraries and the NAG Toolbox for MATLAB. Early releases may be made available upon request.

References

- http://developer.download.nvidia.com/CUDA/training/Optimizing_Mem_limited_kernels.mp4
- http://www.nag.co.uk/numeric/GPUs/doc.asp
- http://www.nag.co.uk/numeric/GPUs/gpu_demo_applications

Numerical Algorithms Group Ltd
E-mail address: jacques@nag.co.uk

³ http://www.nag.co.uk/numeric/gpus/index.asp
Activity Report 2016
Project-Team SPADES
Sound Programming of Adaptive Dependable Embedded Systems

In collaboration with: Laboratoire d'Informatique de Grenoble (LIG)
Research center: Grenoble - Rhône-Alpes
Theme: Embedded and Real-time Systems

Table of contents

1. Members
2. Overall Objectives
3. Research Program
   3.1. Introduction
   3.2. Components and Contracts
   3.3. Real-Time Multicore Programming
   3.4. Language-Based Fault Tolerance
4. Application Domains
   4.1. Industrial Applications
   4.2. Industrial Design Tools
   4.3. Current Industrial Cooperations
5. New Software and Platforms
6. New Results
   6.1. Components and contracts
      6.1.1. Contracts for the negotiation of embedded software updates
      6.1.2. Location graphs
   6.2. Real-Time multicore programming
      6.2.1. Time predictable programming languages
      6.2.2. Modular distribution of synchronous programs
      6.2.3. Parametric dataflow models
      6.2.4. Synthesis of switching controllers using approximately bisimilar multiscale abstractions
      6.2.5. Schedulability of weakly-hard real-time systems
   6.3. Language Based Fault-Tolerance
      6.3.1. Fault Ascription in Concurrent Systems
      6.3.2. Tradeoff exploration between energy consumption and execution time
      6.3.3. Automatic transformations for fault tolerant circuits
      6.3.4. Concurrent flexible reversibility
7. Bilateral Contracts and Grants with Industry
   7.1. Bilateral Contracts with Industry
   7.2. Bilateral Grants with Industry
8. Partnerships and Cooperations
   8.1. Regional Initiatives
   8.2. European Initiatives
   8.3. International Initiatives
   8.4. International Research Visitors
9. Dissemination
   9.1. Promoting Scientific Activities
      9.1.1. Scientific events organisation
      9.1.2. Scientific events selection
         9.1.2.1. Chair of conference program committees
         9.1.2.2. Member of conference program committees
         9.1.2.3. Reviewer
      9.1.3. Journal
         9.1.3.1. Member of the editorial boards
         9.1.3.2. Reviewer - Reviewing activities
      9.1.4. Research administration
   9.2. Teaching - Supervision - Juries
      9.2.1. Teaching
      9.2.2. Supervision
      9.2.3. Juries
   9.3. Popularization
10. Bibliography

Project-Team SPADES

Creation of the Team: 2013 January 01, updated into Project-Team: 2015 July 01

Keywords:

**Computer Science and Digital Science:**
- 1.1.1. - Multicore
- 1.1.9. - Fault tolerant systems
- 1.3. - Distributed Systems
- 2.1.1. - Semantics of programming languages
- 2.1.6. - Concurrent programming
- 2.1.8. - Synchronous languages
- 2.3. - Embedded and cyber-physical systems
- 2.3.1. - Embedded systems
- 2.3.2. - Cyber-physical systems
- 2.3.3. - Real-time systems
- 2.4.1. - Analysis
- 2.4.3. - Proofs
- 2.5.2. - Component-based Design

**Other Research Topics and Application Domains:**
- 6.6. - Embedded systems

1. Members

**Research Scientists**
- Alain Girault [Team leader, Inria, Senior Researcher, HDR]
- Pascal Fradet [Inria, Researcher, HDR]
- Gregor Goessler [Inria, Researcher, HDR]
- Sophie Quinton [Inria, Researcher]
- Jean-Bernard Stefani [Inria, Senior Researcher]

**Faculty Member**
- Xavier Nicollin [Grenoble INP, Associate Professor]

**PhD Students**
- Yoann Geoffroy [Inria, until Dec. 2016]
- Xiaojie Guo [UGA & PERSYVAL-Lab, from Dec. 2016]
- Stephan Plassart [UGA & PERSYVAL-Lab, from Sep. 2016]
- Christophe Prévot [Thales, granted by CIFRE]

**Post-Doctoral Fellow**
- Lijun Shan [Inria, from Nov. 2016]

**Visiting Scientists**
- Athena Abdi [Amirkabir U., until Jul. 2016]
- Leonie Ahrendts [TU Braunschweig, Jan. and Jun. 2016]
- Ismail Assayad [Casablanca U., Sep. 2016]
- Zain Hammadeh [TU Braunschweig, Aug. 2016]

2. Overall Objectives

2.1. Overall Objectives

The SPADES project-team aims at contributing to meet the challenge of designing and programming dependable embedded systems in an increasingly distributed and dynamic context. Specifically, by exploiting formal methods and techniques, SPADES aims to answer three key questions:

1. How to program open networked embedded systems as dynamic adaptive modular structures?
2. How to program reactive systems with real-time and resource constraints on multicore architectures?
3. How to program reliable, fault-tolerant embedded systems with different levels of criticality?
These questions are not new, but answering them in the context of modern embedded systems, which are increasingly distributed, open and dynamic in nature [31], makes them more pressing and more difficult to address: the targeted system properties – dynamic modularity, time-predictability, energy efficiency, and fault-tolerance – are largely antagonistic (e.g., having a highly dynamic software structure is at variance with ensuring that resource and behavioral constraints are met). Tackling these questions together is crucial to address this antagonism, and constitutes a key point of the SPADES research program.

A few remarks are in order:

- We consider these questions to be central in the construction of future embedded systems, dealing as they are with, roughly, software architecture and the provision of real-time and fault-tolerance guarantees. Building a safety-critical embedded system cannot avoid dealing with these three concerns.
- The three questions above are highly connected. For instance, composability along the time, resource consumption and reliability dimensions is key to the success of a component-based approach to embedded systems construction.
- For us, "programming" means any constructive process to build a running system. It can encompass traditional programming as well as high-level design or "model-based engineering" activities, provided that the latter are supported by effective compiling tools to produce a running system.
- We aim to provide semantically sound programming tools for embedded systems. This translates into an emphasis on formal methods and tools for the development of provably dependable systems.

3. Research Program

3.1. Introduction

The SPADES research program is organized around three main themes, Components and contracts, Real-time multicore programming, and Language-based fault tolerance, that seek to answer the three key questions identified in Section 2.1.
We plan to do so by developing and/or building on programming languages and techniques based on formal methods and formal semantics (hence the use of "sound programming" in the project-team title). In particular, we seek to support design where correctness is obtained by construction, relying on proven tools and verified constructs, with programming languages and programming abstractions designed with verification in mind.

3.2. Components and Contracts

Component-based construction has long been advocated as a key approach to the "correct-by-construction" design of complex embedded systems [65]. Witness component-based toolsets such as UC Berkeley's PTOLEMY [53], Verimag's BIP [36], or the modular architecture frameworks used, for instance, in the automotive industry (AUTOSAR) [28]. For building large, complex systems, a key feature of component-based construction is the ability to associate with components a set of contracts, which can be understood as rich behavioral types that can be composed and verified to guarantee that a component assembly will meet desired properties. The goal in this theme is to study the formal foundations of the component-based construction of embedded systems, to develop component and contract theories dealing with real-time, reliability and fault-tolerance aspects of components, and to develop proof-assistant-based tools for the computer-aided design and verification of component-based systems.

Formal models for component-based design are an active area of research (see e.g., [29], [30]). However, we are still missing a comprehensive formal model and its associated behavioral theory able to deal at the same time with different forms of composition, dynamic component structures, and quantitative constraints (such as timing, fault-tolerance, or energy consumption).
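To make the contracts-as-behavioral-types idea concrete, here is a deliberately naive sketch. The encoding (names `Contract`, `refines`, `satisfies`, and the use of finite sets of abstract behaviors) is our own illustration, not the project's formalism: a contract pairs an assumption set (environment behaviors the component can cope with; a larger set is a weaker assumption) with a guarantee set (behaviors the component may exhibit; a smaller set is a stronger guarantee).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    assumptions: frozenset  # environment behaviors the component copes with
    guarantees: frozenset   # behaviors the component may exhibit in return

def refines(c1: "Contract", c2: "Contract") -> bool:
    """c1 refines c2: c1 copes with at least the environments of c2
    (weaker assumption) and exhibits no more behaviors (stronger
    guarantee), so a c1-component can substitute a c2-component."""
    return c2.assumptions <= c1.assumptions and c1.guarantees <= c2.guarantees

def satisfies(behaviors: frozenset, c: "Contract") -> bool:
    """A component, abstracted as its set of possible behaviors,
    satisfies a contract if every behavior is among the guarantees."""
    return behaviors <= c.guarantees
```

The point of the exercise is substitutability: if a component satisfies a refining contract, it automatically satisfies the refined one, since its behaviors are contained in the smaller guarantee set.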
Notions of contracts and interface theories have been proposed to support the modular and compositional design of correct-by-construction embedded systems (see e.g., [40], [41] and the references therein), but having a comprehensive theory of contracts that deals with all the above aspects is still an open question [71]. In particular, it is not clear how to accommodate different forms of composition, reliability and fault-tolerance aspects, or how to deal with evolving component structures in a theory of contracts.

Dealing in the same component theory with heterogeneous forms of composition, different quantitative aspects, and dynamic configurations requires considering together the three elements that comprise a component model: behavior, structure and types. Behavior refers to behavioral (interaction and execution) models that characterize the behavior of components and component assemblages (e.g., transition systems and their multiple variants – timed, stochastic, etc.). Structure refers to the organization of component assemblages or configurations, and the composition operators they involve. Types refer to properties or contracts that can be attached to components and component interfaces to facilitate separate development and ensure the correctness of component configurations with respect to certain properties. Taking dynamicity into account requires establishing an explicit link between behavior and structure, as well as considering higher-order systems, both of which have a direct impact on types.

We plan to develop our component theory by progressing on two fronts: component calculi, and semantical framework. The work on typed component calculi aims to elicit process calculi that capture the main insights of component-based design and programming and that can serve as a bridge towards actual architecture description and programming language developments.
The work on the semantical framework should, in the longer term, provide abstract mathematical models for the more operational and linguistic analysis afforded by component calculi. Our work on component theory will find its application in the development of a Coq-based toolchain for the certified design and construction of dependable embedded systems, which constitutes our third main objective for this axis.

3.3. Real-Time Multicore Programming

Programming real-time systems (i.e., systems whose correct behavior depends on meeting timing constraints) requires appropriate languages (as exemplified by the family of synchronous languages [39]), but also the support of efficient scheduling policies, execution time and schedulability analyses to guarantee real-time constraints (e.g., deadlines) while making the most effective use of available (processing, memory, or networking) resources. Schedulability analysis involves analyzing the worst-case behavior of real-time tasks under a given scheduling algorithm and is crucial to guarantee that time constraints are met in any possible execution of the system.

Reactive programming and real-time scheduling and schedulability for multiprocessor systems are old subjects, but they are nowhere near as mature as their uniprocessor counterparts, and they still feature a number of open research questions [35], [48], in particular in relation with mixed criticality systems. The main goal in this theme is to address several of these open questions. We intend to focus on two issues: multicriteria scheduling on multiprocessors, and schedulability analysis for real-time multiprocessor systems. Beyond real-time aspects, multiprocessor environments, and multicore ones in particular, are subject to several constraints in conjunction, typically involving real-time, reliability and energy-efficiency constraints, making the scheduling problem more complex for both the offline and the online cases.
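As background on what a schedulability test computes, the classical Liu and Layland utilization bound for rate-monotonic scheduling on a single processor is easy to state in code. This is textbook material only (a sufficient, not necessary, condition); the multiprocessor, mixed-criticality analyses discussed in this theme go well beyond it:

```python
def rm_utilization_test(tasks):
    """Sufficient rate-monotonic schedulability test (Liu & Layland).

    tasks is a list of (wcet, period) pairs with implicit deadlines
    (deadline = period).  Returns True if the total utilization is
    below n * (2**(1/n) - 1), which guarantees schedulability under
    fixed-priority rate-monotonic scheduling on one processor.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1 / n) - 1)
```

For two tasks the bound is about 0.828; a task set may exceed the bound and still be schedulable, which is why exact response-time analysis is used in practice.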
Schedulability analysis for multiprocessor systems, in particular for systems with mixed criticality tasks, is still very much an open research area. Distributed reactive programming is rightly singled out as a major open issue in the recent, but heavily biased (it essentially ignores recent research in synchronous and dataflow programming), survey by Bainomugisha et al. [35]. For our part, we intend to focus on two questions: devising synchronous programming languages for distributed systems and precision-timed architectures, and devising dataflow languages for multiprocessors supporting dynamicity and parametricity while enjoying effective analyses for meeting real-time, resource and energy constraints in conjunction.

3.4. Language-Based Fault Tolerance

Tolerating faults is a clear and present necessity in networked embedded systems. At the hardware level, modern multicore architectures are manufactured using inherently unreliable technologies [43], [58]. The evolution of embedded systems towards increasingly distributed architectures highlighted in the introductory section means that dealing with partial failures, as in Web-based distributed systems, becomes an important issue. While fault-tolerance is an old and much researched topic, several important questions remain open: automation of fault-tolerance provision, composable abstractions for fault-tolerance, fault diagnosis, and fault isolation.

The first question is related to the old question of "system structure for fault-tolerance" as originally discussed by Randell for software fault tolerance [77], and concerns in part our ability to clearly separate fault-tolerance aspects from the design and programming of purely "functional" aspects of an application. The classical arguments in favor of a clear separation of fault-tolerance concerns from application code revolve around reduced code and maintenance complexity [49].
The second question concerns the definition of appropriate abstractions for the modular construction of fault-tolerant embedded systems. The current set of techniques available for building such systems spans a wide range, including exception handling facilities, transaction management schemes, rollback/recovery schemes, and replication protocols. Unfortunately, these different techniques do not necessarily compose well – for instance, combining exception handling and transactions is non-trivial, witness the flurry of recent work on the topic, see e.g., [64] and the references therein –, they have no common semantical basis, and they suffer from limited programming language support. The third question concerns the identification of the causes of faulty behavior in component-based assemblages. It is directly related to the much researched area of fault diagnosis, fault detection and isolation [66].

We intend to address these questions by leveraging programming language techniques (programming constructs, formal semantics, static analyses, program transformations) with the goal of achieving provable fault-tolerance, i.e., the construction of systems whose fault-tolerance can be formally ensured using verification tools and proof assistants. We aim in this axis to address some of the issues raised by the above open questions by using aspect-oriented programming techniques and program transformations to automate the inclusion of fault-tolerance in systems (software as well as hardware), by exploiting reversible programming models to investigate composable recovery abstractions, and by leveraging causality analyses to study fault ascription in component-based systems. Compared to the huge literature on fault-tolerance in general, in particular in the systems area (see e.g., [59] for an interesting but not so recent survey), we find comparatively little work exploiting formal language techniques and tools to achieve or support fault-tolerance.
The works reported in [42], [44], [47], [54], [67], [76], [81] provide a representative sample of such recent works. A common theme in this axis is the use and exploitation of causality information. Causality, i.e., the logical dependence of an effect on a cause, has long been studied in disciplines such as philosophy [72], natural sciences, law [73], and statistics [74], but it has only recently emerged as an important focus of research in computer science. The analysis of logical causality has applications in many areas of computer science. For instance, tracking and analyzing logical causality between events in the execution of a concurrent system is required to ensure reversibility [70], to allow the diagnosis of faults in a complex concurrent system [61], or to enforce accountability [69], that is, designing systems in such a way that it can be determined without ambiguity whether a required safety or security property has been violated, and why. More generally, the goal of fault-tolerance can be understood as preventing certain causal chains from occurring, by designing systems such that each causal chain either has its premises outside of the fault model (e.g., by introducing redundancy [59]), or is broken (e.g., by limiting fault propagation [78]).

4. Application Domains

4.1. Industrial Applications

Our applications are in the embedded systems area, typically: transportation, energy production, robotics, telecommunications, and systems on chip (SoC). In some areas, safety is critical, and motivates the investment in formal methods and techniques for design. But even in less critical contexts, like telecommunications and multimedia, these techniques can be beneficial in improving the efficiency and the quality of designs, as well as the cost of the programming and the validation processes.
Industrial acceptance of formal techniques, as well as their deployment, necessarily goes through their usability by specialists of the application domain, rather than of the formal techniques themselves. Hence, we are looking to propose domain-specific (but generic) realistic models, validated through experience (e.g., control task systems), based on formal techniques with a high degree of automation (e.g., synchronous models), and tailored for concrete functionalities (e.g., code generation).

4.2. Industrial Design Tools

The commercially available design tools (such as UML with real-time extensions, MATLAB/SIMULINK/dSPACE¹) and execution platforms (OS such as VXWORKS, QNX, real-time versions of LINUX ...) now start to provide, besides their core functionalities, design or verification methods. Some of them, founded on models of reactive systems, come close to tools with a formal basis, such as for example STATEMATE by iLOGIX. Regarding the synchronous approach, commercial tools are available: SCADE² (based on LUSTRE), CONSTBILT and RT-BUILDER (based on SIGNAL) from GEENSYS³ (part of DASSAULT SYSTEMES), and specialized environments like CELLCONTROL for industrial automatism (by the INRIA spin-off ATHYS—now part of DASSAULT SYSTEMES). One can observe that behind the variety of actors there is a real consistency in the synchronous technology, which ensures that the results of our work related to the synchronous approach are not restricted to some language due to compatibility issues.

4.3. Current Industrial Cooperations

Regarding applications and case studies with industrial end-users of our techniques, we cooperate with Thales on schedulability analysis for evolving or underspecified real-time embedded systems, with Orange Labs on software architecture for cloud services, and with Daimler on the reduction of nondeterminism and the analysis of deadline miss models for the design of automotive systems.

5. New Software and Platforms

5.1.
pyCPA_TWCA: A pyCPA plugin for computing deadline miss models

FUNCTIONAL DESCRIPTION

We are developing pyCPA_TWCA, a pyCPA plugin for Typical Worst-Case Analysis as described in Section 6.2.5. pyCPA is an open-source Python implementation of Compositional Performance Analysis developed at TU Braunschweig, which allows in particular response-time analysis. pyCPA_TWCA is an extension of this tool that is co-developed by Sophie Quinton, Zain Hammadeh (TU Braunschweig) and Leonie Ahrendts (TU Braunschweig). It allows in particular the computation of weakly-hard guarantees for real-time tasks, i.e., the number of deadline misses out of a sequence of executions. This year, pyCPA_TWCA has been extended to task chains but remains limited to uniprocessor systems scheduled according to static priority scheduling. A public release is planned but has not yet taken place.

- Authors: Zain Hammadeh, Leonie Ahrendts and Sophie Quinton.
- Contact: Sophie Quinton.

¹ http://www.dspaceinc.com
² http://www.esterel-technologies.com
³ http://www.geensoft.com

6. New Results

6.1. Components and contracts

Participants: Alain Girault, Christophe Prévot, Sophie Quinton, Jean-Bernard Stefani.

6.1.1. Contracts for the negotiation of embedded software updates

We address the issue of change after deployment in safety-critical embedded system applications, in collaboration with Thales and also in the context of the CCC project (http://ccc-project.org/). The goal of CCC is to substitute lab-based verification with in-field formal analysis to determine whether an update may be safely applied. This is challenging because it requires an automated process able to handle multiple viewpoints such as functional correctness, timing, etc. For this purpose, we propose an original methodology for contract-based negotiation of software updates. The use of contracts allows us to cleanly split the verification effort between the lab and the field.
In addition, we show how to rely on existing viewpoint-specific methods for update negotiation. We have validated our approach on a concrete example inspired by the automotive domain, in collaboration with our German partners from TU Braunschweig [19]. In our collaboration with Thales we mostly focus on timing aspects, with the objective of anticipating future software evolutions at design time and identifying potential schedulability bottlenecks. This year we have presented an approach to quantify the flexibility of a system with respect to timing. In particular, we have shown that it is possible under certain conditions to identify the task that will directly induce the limitations on a possible software update. If performed at design time, such a result can be used to adjust the system design by giving more slack to the limiting task [21].

6.1.2. Location graphs

The design of configurable systems can be streamlined and made more systematic by adopting a component-based structure, as demonstrated with the FRACTAL component model [2]. However, the formal foundations of configurable component-based systems, featuring higher-order capabilities where components can be dynamically instantiated and passivated, and non-hierarchical structures where components can be contained in different composites at the same time, are still an open topic. We have recently introduced the location graph model [79], in which components are understood as graphs of locations hosting higher-order processes, and component structures can be arbitrary graphs. We have continued the development of location graphs, revisiting the underlying structural model (hypergraphs instead of graphs), and simplifying its operational semantics while preserving the model's expressivity.
Towards the development of a behavioral theory of location graphs, we have defined different notions of bisimilarity for location graphs and shown them to be congruences, although a fully fledged co-inductive characterization of contextual equivalence for location graphs is still in the works. This work has not yet been published.

6.2. Real-Time multicore programming

Participants: Pascal Fradet, Alain Girault, Gregor Goessler, Xavier Nicollin, Sophie Quinton.

6.2.1. Time predictable programming languages

Time predictability (PRET) is a topic that emerged in 2007 as a solution to the ever increasing unpredictability of today's embedded processors, which results from features such as multi-level caches or deep pipelines [52]. For many real-time systems, it is mandatory to compute a strict bound on the program's execution time. Yet, in general, computing a tight bound is extremely difficult [82]. The rationale of PRET is to simplify both the programming language and the execution platform so that precise execution times can be computed more easily [34]. Following our past results on the PRET-C programming language [32], we have proposed a time predictable synchronous programming language for multicores, called FOREC. It extends C with a small set of ESTEREL-like synchronous primitives to express concurrency, interaction with the environment, looping, and a synchronization barrier [83] (like the pause statement in ESTEREL). FOREC threads communicate with each other via shared variables, the values of which are combined at the end of each tick to maintain deterministic execution. We provide several deterministic combine policies for shared variables, in a way similar to concurrent revisions [45]. Thanks to this, FOREC benefits from a deterministic semantics. FOREC is compiled into threads that are then statically scheduled for a target multicore chip.
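The effect of such a combine policy can be illustrated with a small sketch. This is our own Python illustration (names and interface are ours, not FOREC code): each thread writes to a thread-local copy of the shared variable during the tick, and at the tick barrier the copies are merged with an associative, commutative combine function, so the merged value does not depend on thread interleaving.

```python
from functools import reduce

def end_of_tick(initial, local_writes, combine):
    """Merge thread-local copies of a shared variable at the tick barrier.

    local_writes maps thread id -> value written during the tick
    (threads that did not write are simply absent).  Provided combine
    is associative and commutative, every merge order yields the same
    result, which is what makes the semantics deterministic.
    """
    # Sorting by thread id fixes *a* merge order; with an associative
    # and commutative combine, any other order gives the same value.
    values = [v for _, v in sorted(local_writes.items())]
    return reduce(combine, values, initial)
```

With addition, min, max, or set union as the combine function, any interleaving of the writing threads produces the same end-of-tick value.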
Our WCET analysis takes into account the access to the shared TDMA bus and the necessary administration for the shared variables. We achieve a very precise WCET (the over-approximation being less than 2%) thanks to an exploration of the threads’ reachable states [15]. We have published a research report presenting the complete semantics and the compiler [27], and submitted it to a journal. Furthermore, we have extended the PRET-C compiler [32] in order to make it energy aware. To achieve this, we use dynamic voltage and frequency scaling (DVFS) and we insert DVFS control points in the control flow graph of the PRET-C program. The difficulty is twofold: first, the control flow graph is concurrent; second, the resulting optimization problem lives in the 2D (time, energy) space. Thanks to a novel ILP formulation and to a bicriteria heuristic, we are able to address the two objectives jointly and to compute, for each PRET-C program, the Pareto front of the non-dominated solutions in the 2D (time, energy) space [20]. This is a collaboration with Eugene Yip from Bamberg University, and with Partha Roop and Jiajie Wang from the University of Auckland.

### 6.2.2. Modular distribution of synchronous programs

Synchronous programming languages describe functionally centralized systems, where every value, input, output, or function is always directly available for every operation. However, most embedded systems are nowadays composed of several computing resources. The aim of this work is to provide a language-oriented solution to describe functionally distributed reactive systems. This research started within the Inria large scale action SYNCHRONICS and is a joint work with Marc Pouzet (ENS, PARKAS team from Rocquencourt) and Gwenaël Delaval (UGA, CTRL-A team from Grenoble). We are working on defining a fully-conservative extension of a synchronous data-flow programming language (the HEPTAGON language, inspired from LUCID SYNCHRONE [46]).
The extension, by means of annotations, adds abstract location parameters to functions, and communications of values between locations. At deployment, every abstract location is assigned an actual one; this yields an executable for each actual computing resource. Compared to the PhD of Gwenaël Delaval [50], [51], the goal here is to achieve modular distribution even in the presence of non-static clocks, i.e., clocks defined according to the value of inputs. By fully-conservative, we have three aims in mind:

1. A non-annotated (i.e., centralized) program will be compiled exactly as before;
2. An annotated program eventually deployed onto only one computing location will behave exactly as its centralized counterpart;
3. The input-output semantics of a distributed program is the same as that of its centralized counterpart.

By modular, we mean that we want to compile each function of the program into a single function capable of running on any computing location. At deployment, the program of each location may be optimized (by simple Boolean constant propagation, dead-code and unused-variable elimination), yielding different optimized code for each computing location. We have formalized the type system for inferring the location of each variable and computation. In the presence of local clocks, additional information is computed from the existing clock calculus and the location calculus, to infer the necessary communication of clocks between locations. All pending theoretical and technical issues have been resolved, and the new compiler is being implemented, with new algorithms for deployment (and code optimization), achieving the three aims detailed above.

### 6.2.3. Parametric dataflow models

Recent data-flow programming environments support applications whose behavior is characterized by dynamic variations in resource requirements.
The high expressive power of the underlying models (e.g., Kahn Process Networks or the CAL actor language) makes it challenging to ensure predictable behavior. In particular, checking liveness (i.e., no part of the system will deadlock) and boundedness (i.e., the system can be executed in finite memory) is known to be hard or even undecidable for such models. This situation is troublesome for the design of high-quality embedded systems. Recently, we have introduced the Schedulable Parametric Data-Flow (SPDF) MoC for dynamic streaming applications [55], which extends the standard dataflow model by allowing rates to be parametric, and the Boolean Parametric Data Flow (BPDF) MoC [38], [37] which combines integer parameters (to express dynamic rates) and boolean parameters (to express the activation and deactivation of communication channels). In the past years, several other parametric dataflow MoCs have been presented. All these models aim at providing an interesting trade-off between analyzability and expressiveness. They offer a controlled form of dynamism under the form of parameters (e.g., parametric rates), along with run-time parameter configuration. We have written a survey which provides a comprehensive description of the existing parametric dataflow MoCs (constructs, constraints, properties, static analyses) and compares them using a common example [11]. The main objectives are to help designers of streaming applications to choose the most suitable model for their needs and to pave the way for the design of new parametric MoCs. We have also studied symbolic analyses of data-flow graphs [24], [16], [17], [12]. Symbolic analyses express the system performance as a function of parameters (i.e., input and output rates, execution times). Such functions can be quickly evaluated for each different configuration or checked w.r.t. different quality-of-service requirements. 
These analyses are useful for parametric MoCs, partially specified graphs, and even for completely static SDF graphs. We provide symbolic analyses for computing the maximal throughput of acyclic synchronous dataflow graphs, the minimum buffer sizes for which as-soon-as-possible (ASAP) scheduling achieves this throughput, and finally the corresponding input-output latency of the graph. We first investigate these problems for a single parametric edge. The results are then extended to general acyclic graphs using linear approximation techniques. We assess the proposed analyses experimentally on both synthetic and real benchmarks.

### 6.2.4. Synthesis of switching controllers using approximately bisimilar multiscale abstractions

The use of discrete abstractions for continuous dynamics has become standard in hybrid systems design (see e.g., [80] and the references therein). The main advantage of this approach is that it offers the possibility to leverage controller synthesis techniques developed in the area of supervisory control of discrete-event systems [75]. The first attempts to compute discrete abstractions for hybrid systems were based on traditional behavioral relationships between systems, such as simulation or bisimulation, initially proposed for discrete systems, most notably in the area of formal methods. These notions require inclusion or equivalence of observed behaviors, which is often too restrictive when dealing with systems observed over metric spaces. For such systems, a more natural abstraction requirement is to ask for closeness of observed behaviors. This leads to the notions of approximate simulation and bisimulation introduced in [56]. These approaches are based on sampling of time and space, where the sampling parameters must satisfy some relation in order to obtain abstractions of a prescribed precision.
In particular, the smaller the time sampling parameter, the finer the lattice used for approximating the state space; this may result in abstractions with a very large number of states when the sampling period is small. However, there are a number of applications where sampling has to be fast, even though this is generally necessary only on a small part of the state space. We have been exploring two approaches to overcome this state-space explosion [5]. We are currently investigating an approach using mode sequences of given length as symbolic states for our abstractions. By using mode sequences of variable length we are able to adapt the granularity of our abstraction to the dynamics of the system, so as to automatically trade off precision against controllability of the abstract states.

### 6.2.5. Schedulability of weakly-hard real-time systems

We focus on the problem of computing tight deadline miss models for real-time systems, which bound the number of potential deadline misses in a given sequence of activations of a task. In practical applications, such guarantees are often sufficient because many systems are in fact not hard real-time [4]. Our major contribution this year is the extension of our method for computing deadline miss models, called Typical Worst-Case Analysis (TWCA), to systems with task dependencies. This allows us to provide bounds on deadline misses for systems which until now could not be analyzed [18]. In parallel, we have developed an extension of sensitivity analysis for budgeting in the design of weakly-hard real-time systems. During design, it often happens that some parts of a task set are fully specified while other parameters, e.g., regarding recovery or monitoring tasks, will be available only much later. In such cases, sensitivity analysis can help anticipate how these missing parameters can influence the behavior of the whole system so that a resource budget can be allocated to them.
We have developed an extension of sensitivity analysis for deriving task budgets for systems with hard and weakly-hard requirements. This approach has been validated on synthetic test cases and on a realistic case study provided by our partner Thales. This work will be submitted soon. Finally, in collaboration with TU Braunschweig and Daimler, we have investigated the use of TWCA in conjunction with the Logical Execution Time paradigm [68], according to which data are read and written at predefined time instants. In particular, we have extended TWCA to different deadline miss handling strategies. This work has not been published yet.

6.3. Language Based Fault-Tolerance

Participants: Pascal Fradet, Alain Girault, Yoann Geoffroy, Gregor Goessler, Jean-Bernard Stefani, Martin Vassor, Athena Abdi.

### 6.3.1. Fault Ascription in Concurrent Systems

The failure of one component may entail a cascade of failures in other components; several components may also fail independently. In such cases, elucidating the exact scenario that led to the failure is a complex and tedious task that requires significant expertise. The notion of causality (did an event \( e \) cause an event \( e' \)?) has been studied in many disciplines, including philosophy, logic, statistics, and law. The definitions of causality studied in these disciplines usually amount to variants of the counterfactual test: “\( e \) is a cause of \( e' \) if both \( e \) and \( e' \) have occurred, and in a world that is as close as possible to the actual world but where \( e \) does not occur, \( e' \) does not occur either”. In computer science, almost all definitions of logical causality, including the landmark definition of [63] and its derivatives, rely on a causal model that may not be known, for instance in the presence of black-box components.
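The counterfactual test quoted above can be illustrated on a toy structural model. This is a hypothetical Python sketch for intuition only, not the team's fault ascription framework; the model, the event names, and the `is_cause` helper are all invented for the example.

```python
# Toy structural model: the system failure occurs iff the sensor or the
# backup is faulty. The counterfactual test asks: does removing event e
# from the actual world also remove the effect e'?
def run(model, events):
    # model maps each derived event to a function of the primitive events
    return {name: f(events) for name, f in model.items()}

model = {"system_failure": lambda ev: ev["sensor_fault"] or ev["backup_fault"]}
actual = {"sensor_fault": True, "backup_fault": False}

def is_cause(model, actual, e, e_prime):
    # counterfactual test: e' occurred together with e in the actual
    # world, and e' no longer occurs when e is removed while everything
    # else stays as in the actual world
    if not (actual[e] and run(model, actual)[e_prime]):
        return False
    counterfactual = dict(actual, **{e: False})
    return not run(model, counterfactual)[e_prime]

print(is_cause(model, actual, "sensor_fault", "system_failure"))  # True
```

Here removing the sensor fault also removes the system failure, so the test flags it as a cause; the backup fault, which did not occur in the actual world, fails the test.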
For such systems, we have been developing a framework for blaming that helps establish the causal relationship between component failures and system failures, given an observed system execution trace. The analysis is based on a formalization of counterfactual reasoning [7]. In his PhD thesis, Yoann Geoffroy proposed a generalization of our fault ascription technique to systems composed of black-box and white-box components; for the latter, a faithful behavioral model is given, but no specification. The approach leverages results from game theory and discrete controller synthesis to define several notions of causality. We are currently working on an instantiation of our general semantic framework for fault ascription [60] to acyclic models of computation, in order to compare our approach with the standard definition of actual causality proposed by Halpern and Pearl.

### 6.3.2. Tradeoff exploration between energy consumption and execution time

We have continued our work on multi-criteria scheduling, in two directions. First, in the context of dynamic applications that are launched and terminated on an embedded homogeneous multi-core chip, under execution time and energy consumption constraints, we have proposed a two-layer adaptive scheduling method. In the first layer, each application (represented as a DAG of tasks) is scheduled statically on subsets of cores: 2 cores, 3 cores, 4 cores, and so on. For each size of these subsets (2, 3, 4, ...), there may be one or several topologies. For instance, for 2 or 3 cores there is only one topology (a “line”), while for 4 cores there are three distinct topologies (“line”, “square”, and “T shape”). Moreover, for each topology, we generate statically several schedules, each one subject to a different total energy consumption constraint, and consequently with a different Worst-Case Reaction Time (WCRT).
Coping with the energy consumption constraints is achieved thanks to Dynamic Voltage and Frequency Scaling (DVFS). In the second layer, we use these pre-generated static schedules to reconfigure dynamically the applications running on the multi-core each time a new application is launched or an existing one is stopped. The goal of the second layer is to perform a dynamic global optimization of the configuration, such that each running application meets a pre-defined quality-of-service constraint (translated into an upper bound on its WCRT) and such that the total energy consumption is minimized. For this, we (i) allocate a sufficient number of cores to each active application, (ii) allocate the unassigned cores to the applications yielding the largest gain in energy, and (iii) choose for each application the best topology for its subset of cores (i.e., better than the default “line” topology). This is a joint work with Ismail Assayad (U. Casablanca, Morocco), who visited the team in September 2015. Second, in the context of a static application (again represented as a DAG of tasks) running on a homogeneous multi-core chip, we have worked on static scheduling that minimizes the WCRT of the application under the multiple constraints that the reliability, the power consumption, and the temperature remain below given thresholds. There are multiple difficulties: (i) the reliability is not an invariant measure w.r.t.
time, which makes it impossible to use backtrack-free scheduling algorithms such as list scheduling [33]; to overcome this, we adopt instead the Global System Failure Rate (GSFR) as a measure of the system’s reliability, which is invariant with time [57]; (ii) keeping the power consumption under a given threshold requires lowering the voltage and frequency, but this has a negative impact both on the WCRT and on the GSFR; keeping the GSFR below a given threshold requires replicating the tasks on multiple cores, but this has a negative impact on the WCRT, the power consumption, and the temperature; (iii) keeping the temperature below a given threshold is even more difficult because the temperature continues to increase even after the activity stops, so each scheduling decision must be assessed not based on the current state of the chip (i.e., the temperature of each core) but on its state at the end of the candidate task, and cooling slacks must be inserted. We have proposed a multi-criteria scheduling heuristic to address these challenges. It produces a static schedule of the given application graph on the given architecture description, such that the GSFR, power, and temperature thresholds are satisfied, and such that the execution time is minimized. We then combine our heuristic with a variant of the ε-constraint method [62] in order to produce, for a given application graph and a given architecture description, its entire Pareto front in the 4D space (execution time, GSFR, power, temperature). This is a joint work with Athena Abdi and Hamid Zarandi from Amirkabir University, Iran, who visited the team in 2016.

### 6.3.3. Automatic transformations for fault tolerant circuits

In the past years, we have studied the implementation of specific fault tolerance techniques in real-time embedded systems using program transformation [1]. We are now investigating the use of automatic transformations to ensure fault-tolerance properties in digital circuits.
To this aim, we consider program transformations for hardware description languages (HDL). We consider both single-event upsets (SEU) and single-event transients (SET), with fault models of the form “at most 1 SEU or SET within n clock cycles”. We have expressed several variants of triple modular redundancy (TMR) as program transformations. We have also proposed a verification-based approach to minimize the number of voters in TMR [25]. Our technique guarantees that the resulting circuit (i) is tolerant to the soft errors defined by the fault model and (ii) is functionally equivalent to the initial one. Our approach operates at the logic level and takes into account the input and output interface specifications of the circuit. Its implementation makes use of graph traversal algorithms, fixed-point iterations, and BDDs. Experimental results on the ITC’99 benchmark suite indicate that our method significantly decreases the number of inserted voters, which entails a hardware reduction of up to 55% and a clock frequency increase of up to 35% compared to full TMR. We address scalability issues arising from formal verification by introducing approximations, and assess their efficiency and precision. As our experiments show, replacing the SEU fault model with the stricter SET fault model has only a minor impact on the number of removed voters. On the other hand, BDD-based modeling of SET effects is a more complex task than modeling an SEU as a bit-flip; we propose solutions for this task and explain the nature of the problems encountered.

### 6.3.4. Concurrent flexible reversibility

Reversible concurrent models of computation provide natively what appears to be very fine-grained checkpoint and recovery capabilities.
We have made this intuition clear by formally comparing a distributed algorithm for checkpointing and recovery based on causal information with the distributed backtracking algorithm that lies at the heart of our reversible higher-order pi-calculus. We have shown that (a variant of) the reversible higher-order calculus with explicit rollback can faithfully encode a distributed causal checkpoint and recovery algorithm. The reverse is also true, but under precise conditions which restrict the ability to roll back a computation to an identified checkpoint. This work has not yet been published.

7. Bilateral Contracts and Grants with Industry

7.1. Bilateral Contracts with Industry

- INRIA and Orange Labs have established this year a joint virtual research laboratory, called I/O LAB. We have been heavily involved in the creation of the laboratory and are actively involved in its operation (Jean-Bernard Stefani is one of the two co-directors of the lab). I/O LAB focuses on network virtualization and cloudification. As part of the work of I/O LAB, we have cooperated with Orange Labs, under a cooperative research contract funded by Orange, on defining architectural principles and frameworks for network cloud infrastructures encompassing the control and management of computing, storage and network resources.
- With Daimler (subcontracting via iUTBS): we have shown how to extend our current method for computing deadline miss models to real-time systems designed according to the Logical Execution Time paradigm.

7.2. Bilateral Grants with Industry

With Thales: Early Performance Assessment for Evolving and Variable Cyber-Physical Systems. This CIFRE grant funds the PhD of Christophe Prévot.

8. Partnerships and Cooperations

8.1. Regional Initiatives

### 8.1.1. CASERM (PERSYVAL-Lab project)

Participants: Pascal Fradet, Alain Girault, Gregor Goessler, Xiaojie Guo, Xavier Nicollin, Stephan Plassart, Sophie Quinton, Jean-Bernard Stefani.
Despite recent advances, there currently exist no integrated formal methods and tools for the design and analysis of reconfigurable multi-view embedded systems. This is the goal of the CASERM project. CASERM represents a significant effort towards a COQ-based design method for reconfigurable multi-view embedded systems, in order to formalize the structure and behavior of systems and to prove their main properties. The use of a proof assistant to support such a framework is motivated by the fact that the targeted systems are both extremely complex and critical. The challenges addressed are threefold:

1. to model software architectures for embedded systems, taking into account their dynamicity and multiple constraints (functional as well as non-functional);
2. to propose novel scheduling techniques for dynamically reconfiguring embedded systems; and
3. to advance the state of the art in automated proving for such systems.

The objectives of CASERM that address these challenges are organized in three tasks. They consist respectively in designing an architecture description framework based on a process calculus, in proposing online optimization methods for dynamic reconfiguration systems (the topic of Stephan Plassart’s PhD), and in developing a formal framework for real-time analysis in the COQ proof assistant (the topic of Xiaojie Guo’s PhD). A fourth task focuses on common case studies for the evaluation of the obtained results. The CASERM consortium gathers researchers from the G-SCOP, LIG and Verimag laboratories who are renowned specialists in these fields. The project started in November 2016 and will last three years.

8.2. European Initiatives

### 8.2.1. Collaborations with Major European Organizations

We have a strong collaboration with the Technische Universität Braunschweig in Germany.
In particular, Sophie Quinton is involved in the CCC project (http://ccc-project.org/), which provides methods and mechanisms for the verification of software updates after deployment in safety-critical systems, and in the TypicalCPA project, which aims at computing deadline miss models for distributed systems. We also have a recent collaboration with the MPI-SWS in Kaiserslautern (Germany) on formal proofs for real-time systems.

8.3. International Initiatives

### 8.3.1. Inria Associate Teams Not Involved in an Inria International Lab

8.3.1.1. Causalysis

Title: Causality Analysis for Safety-Critical Embedded Systems

International Partner (Institution - Laboratory - Researcher): University of Pennsylvania (United States) - PRECISE center - Oleg Sokolsky

Start year: 2015

See also: https://team.inria.fr/causalysis/

Today’s embedded systems become more and more complex, while an increasing number of safety-critical functions rely on them. Determining the cause(s) of a system-level failure and elucidating the exact scenario that led to the failure is today a complex and tedious task that requires significant expertise. The CAUSALYSIS project will develop automated approaches to causality analysis on execution logs.

8.4. International Research Visitors

### 8.4.1. Visits of International Scientists

8.4.1.1. Internships

• Athena Abdi has been a visitor in the team from October 2015 to June 2016. She is doing her PhD at the Amirkabir University of Technology in Tehran, Iran. In the SPADES team, she is working on multi-criteria scheduling for real-time embedded systems, addressing the complex interplay between reliability, power consumption, temperature, and execution time (see 6.3.2).
• Ismail Assayad has been a visitor in the team in September 2015. He is an assistant professor at the University of Casablanca, Morocco. In the SPADES team, he is working on adaptive scheduling methods and admission control for dynamic embedded applications (see 6.3.2).

9. Dissemination

9.1.
Promoting Scientific Activities

9.1.1. Scientific events organisation

9.1.1.1. Member of organizing committees

• Sophie Quinton was artifact evaluation chair of the 24th International Conference on Real-Time Networks and Systems (RTNS’16).
• Sophie Quinton was demo chair of the 22nd IEEE Real-Time Embedded Technology & Applications Symposium (RTAS’16).
• Sophie Quinton was co-chair of the 1st Tutorial on Tools for Real-Time Systems (TuToR’16), held as a satellite event of CPSWeek’16. http://tutor2016.inria.fr/
• Sophie Quinton was co-organizer of the 1st Workshop on Collaboration of Academia and Industry for Real World Embedded Systems (CAIRES’16), held as a satellite event of ESWeek’16. http://caires2016.inria.fr/

9.1.2. Scientific events selection

9.1.2.1. Chair of conference program committees

• Gregor Gössler was co-chair of the 1st International Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST’16) [22], held as a satellite event of ETAPS’16. http://crest2016.inria.fr
• Sophie Quinton was co-chair of the 7th International Workshop on Analysis Tools and Methodologies for Embedded and Real-time Systems (WATERS’16), held as a satellite event of ECRTS’16. http://waters2016.inria.fr

9.1.2.2. Member of conference program committees

• Pascal Fradet served in the program committee of the 15th International Conference on Modularity (MODULARITY’16).
• Alain Girault served in the program committees of the International Conference on Design, Automation and Test in Europe (DATE’16), the Embedded Software conference (EMSOFT’16), and the International Symposium on Industrial Embedded Systems (SIES’16).
• Sophie Quinton served in the program committees of the 28th Euromicro Conference on Real-Time Systems (ECRTS’16), the 24th International Conference on Real-Time Networks and Systems (RTNS’16), the 4th International Workshop on Mixed Criticality Systems (WMC’16), the 10th Junior Researcher Workshop on Real-Time Computing (JRWRTC’16), and in the artifact evaluation committees of ECRTS’16 and the IEEE Real-Time Systems Symposium (RTSS’16).
• Jean-Bernard Stefani served on the program committees of the 36th IFIP International Conference on Formal Techniques for Distributed Objects, Components and Systems (FORTE) and the 8th Conference on Reversible Computation.

9.1.2.3. Reviewer

• Alain Girault reviewed an article for ECRTS’16.
• Gregor Gössler reviewed articles for EMSOFT’16, FACS’16, and RTNS’16.
• Xavier Nicollin reviewed an article for SIES’16.
• Sophie Quinton reviewed articles for EMSOFT’16 and DATE’17.

9.1.3. Journal

9.1.3.1. Member of the editorial boards

• Alain Girault is a member of the editorial board of the EURASIP Journal on Embedded Systems.
• Jean-Bernard Stefani is a member of the editorial board of Annals of Telecommunications.

9.1.3.2. Reviewer – Reviewing activities

• Alain Girault reviewed articles for ACM TECS, Parallel Computing, Embedded Systems Letters, and Microprocessors and Microsystems.
• Gregor Gössler reviewed articles for Formal Methods in System Design (FMSD) and IEEE Transactions on Automatic Control (TAC).
• Jean-Bernard Stefani reviewed articles for Theoretical Computer Science (TCS) and Science of Computer Programming (SCP).

9.1.4. Research administration

• Pascal Fradet is head of the committee for doctoral studies (“Responsable du comité des études doctorales”) of the INRIA Grenoble – Rhône-Alpes research center and local correspondent for the young researchers INRIA mission (mission jeunes chercheurs).
• Alain Girault is Vice Chair of the INRIA Evaluation Committee.
As such, he co-organizes in particular the evaluation seminars of the INRIA teams (twice a year) and all the juries for the hiring and promotion of INRIA’s researchers (CR2, CR1, DR2, DR1, and DR0).
• Jean-Bernard Stefani is Head of Science of the INRIA Grenoble – Rhône-Alpes research center. As such, he manages with the research center director all aspects of the scientific life of the research center (creation of the research teams and their evaluation by international panels, scientific relationships with our academic and industrial partners, hiring of the new junior researchers, ...).
• Jean-Bernard Stefani is co-director of I/O LAB, the joint research laboratory with Orange Labs.

9.2. Teaching - Supervision - Juries

9.2.1. Teaching

Licence: Pascal Fradet, Théorie des Langages 1 & 2, 36 HeqTD, niveau L3, Grenoble INP (Ensimag), France
Licence: Gregor Gössler, Théorie des Langages 2, 36 HeqTD, niveau L3, Grenoble INP (Ensimag), France
Master: Xavier Nicollin, Sémantique et Analyse des Programmes, 11,25 HeqTD, niveau M1, Grenoble INP (Ensimag), France
Licence: Xavier Nicollin, Théorie des Langages 2, 36 HeqTD, niveau L3, Grenoble INP (Ensimag), France
Licence: Xavier Nicollin, Bases de la Programmation Impérative, 66 HeqTD, niveau L3, Grenoble INP (Ensimag), France
Licence: Sophie Quinton, Théorie des Langages 2, 18 HeqTD, niveau L3, Grenoble INP (Ensimag), France
Master: Jean-Bernard Stefani, Formal Aspects of Component Software, 9h, MOSIG, Univ. Grenoble Alpes, France

9.2.2. Supervision

- PhD in progress: Sihem Cherrared, “Fault Management in Multi-Tenant Programmable Networks”, Univ. Rennes 1, since October 2016, co-advised by Eric Fabre and Gregor Gössler.

9.2.3. Juries

- Alain Girault was president of the HDR jury of Goran Frehse (Univ. Grenoble Alpes).
- Sophie Quinton was a member of the PhD jury of Houssam Zahaf (U. Lille).
- Jean-Bernard Stefani was president of the HDR jury of Tom Hirschowitz (U. Savoie).

9.3. Popularization

10.
Bibliography

**Publications of the year**

International Conferences with Proceedings

[17] A. BOUAKAZ, P. FRADET, A. GIRAULT. *Symbolic computation of the latency for dataflow graphs*, in "Integrating Dataflow, Embedded Computing and Architecture (IDEA’2016)", Vienna, Austria, April 2016, https://hal.inria.fr/hal-01417111

**References in notes**

[31] ARTEMIS JOINT UNDERTAKING. *ARTEMIS Strategic Research Agenda*, 2011

[38] V. BEBELIS, P. FRADET, A. GIRAULT, B. LAVIGUEUR. *BPDF: A Statically Analyzable Dataflow Model with Integer and Boolean Parameters*, in "International Conference on Embedded Software (EMSOFT’13)", Montreal, Canada, ACM, September 2013
olmocr_science_pdfs
2024-12-08
2024-12-08
6758e1b03ca8b5894eb64c900b6d03361f0ff9f8
Rethinking Support for Region Conflict Exceptions

Swarnendu Biswas, Indian Institute of Technology Kanpur, India, swarnendu@cse.iitk.ac.in
Rui Zhang, Ohio State University, Columbus, OH, USA, zhang.5944@osu.edu
Michael D. Bond, Ohio State University, Columbus, OH, USA, mikebond@cs.ohio-state.edu
Brandon Lucia, Carnegie Mellon University, Pittsburgh, PA, USA, blucia@cmu.edu

Abstract—Current shared-memory systems provide well-defined execution semantics only for data-race-free execution. A state-of-the-art technique called Conflict Exceptions (CE) extends M(O)ESI-based coherence to provide defined semantics to all program executions. However, CE incurs significant performance costs because of its need to frequently access metadata in memory. In this work, we explore designs for practical architecture support for region conflict exceptions. First, we propose an on-chip metadata cache called access information memory (AIM) to reduce memory accesses in CE. The extended design is called CE+. In spite of the AIM, CE+ stresses or saturates the on-chip interconnect and the off-chip memory network bandwidth because of its reliance on eager write-invalidation-based coherence. We explore whether detecting conflicts is potentially better suited to cache coherence based on release consistency and self-invalidation, rather than M(O)ESI-based coherence. We realize this insight in a novel architecture design called ARC. Our evaluation shows that CE+ improves the run-time performance and energy usage over CE for several applications across different core counts, but can suffer performance penalties from network saturation. ARC generally outperforms CE, and is competitive with CE+ on average while stressing the on-chip interconnect and off-chip memory network much less, showing that coherence based on release consistency and self-invalidation is well suited to detecting region conflicts.

Index Terms—Conflict exceptions; data races; memory consistency models; region serializability

I. INTRODUCTION

To maximize performance, shared-memory systems allow compilers and architectures to perform optimizations assuming that threads communicate only at synchronization operations. Consequently, a program that is not well synchronized permits data races and has complex or undefined semantics that lead to incorrect behavior [1]. Providing well-defined semantics for all programs, without restricting optimizations or impeding performance, is a long-standing challenge [1], [29], [30], [48].

A promising approach is for a shared-memory system to detect conflicts between executing synchronization-free regions (SFRs) [6], [31]. An SFR conflict generates a consistency exception because the conflict corresponds to a data race that may violate optimization assumptions. By detecting conflicts at regions demarcated only by synchronization operations, the system provides a memory consistency model called SFRSx, in which program execution either appears to be a serialization of SFRs or terminates with a consistency exception, indicating a true data race [6], [31].

Supporting SFRSx demands precise conflict detection, requiring per-core tracking of byte-granular memory accesses. Prior work called Conflict Exceptions (CE) provides SFRSx with architecture support [31]. However, CE frequently exchanges metadata between cores and memory on private cache misses and evictions, leading to high bandwidth requirements that can potentially saturate the on-chip interconnect and off-chip memory, thereby degrading performance (Sections II and VI).

This work proposes designs for architecture support for efficient region conflict detection. The first contribution addresses CE’s high performance overheads, which result from frequent memory accesses, by adding a metadata cache adjacent to the shared last-level cache (LLC), called the access information memory (AIM). We call the resulting architecture CE+.
Despite the optimization, CE+ still incurs significant costs in order to support precise conflict detection eagerly at every memory access. CE and CE+ detect conflicts eagerly by relying on an eager write-invalidation-based coherence protocol like M(O)ESI [49] to exchange access metadata between concurrently executing cores, which can saturate the on-chip interconnect and the off-chip memory network.

The second contribution of this work is a novel architecture design called ARC that, like CE and CE+, provides SFRSx, but substantially differs in how it supports conflict detection and cache coherence. The key insight of ARC is that, while M(O)ESI coherence suits a data-race-free (DRF) assumption, region conflict detection is potentially better served by a “lazier” approach to coherence. ARC provides unified mechanisms for cache coherence and conflict detection by extending release consistency and self-invalidation mechanisms, which prior work has employed, but for coherence alone and assuming DRF [11], [19], [26], [27] (Section VII). Like ARC, Transactional Coherence and Consistency (TCC) unifies coherence and conflict detection, but relies on broadcast and serialized transaction commit [20], incurring high costs (Section VI-C).

This material is based upon work supported by the National Science Foundation under Grants XPS-1629126, CCF-1421612, CAREER-1253793, and CSR-1218695.

1 In contrast, DRFx detects conflicts between bounded-length regions, requiring restrictions on compiler optimizations [34], while other proposals detect every data race and incur high costs [16], [53].

CE+ and ARC place different demands on the existing microarchitecture. Whereas CE+ extends CE and hence relies on support for M(O)ESI cache coherence, ARC does not depend on an M(O)ESI implementation or directory and does not require a core-to-core interconnect. Like CE, both CE+ and ARC add per-byte access information to caches and require a backing memory store for evicted access information.
An access information cache in CE+ and ARC avoids memory lookups at an acceptable area and power overhead, and ARC adds distributed consistency controller logic (Section V-C). CE+ and ARC target widely available CMPs with moderate core counts (≤32). CMPs with large core counts (>32) are out of scope because access information storage requirements scale with core count.

We evaluate CE+ and ARC and compare them with CE. CE+ consumes less memory bandwidth, improving performance and energy compared to CE for a number of applications and core counts, which shows the potential of the AIM. However, CE+ can still suffer performance penalties from network saturation. ARC also outperforms CE, and on average has run-time performance and energy usage comparable to CE+. In general, ARC’s lazy approach to coherence means it has lower on-chip and off-chip bandwidth requirements, benefiting applications with large regions and working set sizes (Section VI-B). We also compare ARC with TCC [20] adapted to provide SFRSx. ARC avoids communication and serialization issues faced by CE, CE+, and TCC. Furthermore, we show that ARC achieves well-defined semantics at modest overheads compared to current shared-memory systems that provide weak memory consistency (Section VI-D). Our results show that CE+ and ARC advance the state of the art in architecture support for well-defined memory semantics.

II. BACKGROUND: SFRSx and Conflict Exceptions

As the prior section motivated, a system can provide well-defined execution semantics by detecting conflicts between synchronization-free regions (SFRs). An SFR is a sequence of non-synchronization instructions executed by a single thread, delimited by synchronization operations (e.g., lock acquire, lock release, thread fork, and thread join), as shown in Figure 1.
If a system detects conflicts between unbounded SFRs and generates a consistency exception upon a detected conflict, it provides the SFRSx memory consistency model, which ensures either SFR serializability or a consistency exception indicating a data race [6], [31]. SFRSx thus ensures strong, well-defined semantics for all programs.

Providing SFRSx is different from detecting all data races. Figure 1 shows an execution with data races on variables x and y. Under SFRSx, a system does not need to generate a consistency exception at the read of x for this observed execution because the SFRs accessing x do not overlap. In contrast, the SFRs accessing y overlap; SFRSx throws a consistency exception if it cannot ensure rd y’s SFR serializes before or after wr y’s SFR. Note that an execution may or may not generate an exception at each of Thread 2’s (racy) accesses under SFRSx; if the execution does not generate an exception, it must preserve SFR serializability.

Detecting region conflicts: Prior architectures for both precise and imprecise conflict detection of unbounded regions have the key limitation that they consume a large amount of on-chip bandwidth or increase traffic to main memory [20], [31], [35]. The rest of this section summarizes work that uses precise conflict detection, which is required to provide SFRSx.

Conflict Exceptions (CE) provides SFRSx, adding per-byte access metadata to each cache line to track reads and writes [31]. CE adds local and remote access bits for each byte in a private line to keep track of bytes read or written by an ongoing region in local or remote cores, respectively. CE piggybacks on MOESI coherence [49] to exchange metadata indicated by the per-byte access bits, and detects SFR conflicts by comparing local and remote access bits. For any conflict detected, CE generates a consistency exception to terminate the execution. A core sends its local access bits in an end-of-region (endR) message to other cores at each region boundary.
A core receiving an endR message clears the corresponding remote bits from its private caches and sends back an acknowledgment. CE also handles evictions from private caches to the LLC by storing local access bits of evicted lines in a per-process structure called the global table. Communication at region boundaries and frequent access of the in-memory global table often saturate the on-chip interconnect and off-chip memory bandwidth (shown empirically in Section VI-B).

Fig. 1. Under SFRSx, an execution generates a consistency exception for a data race that may violate SFR serializability.

III. OPTIMIZING CONFLICT EXCEPTIONS WITH THE AIM

Our first contribution in this paper is an optimized design of CE, called CE+. CE+ includes a dedicated cache structure, called the access information memory (AIM), for storing access information that is evicted from private caches. CE+ aims to reduce expensive accesses to the in-memory global table in CE by backing metadata for lines evicted from private caches in the AIM. Conceptually, the AIM sits adjacent to the LLC, and the global table is accessed only on AIM misses.

An AIM entry corresponds to an LLC line, storing for each byte one read bit for each of the C cores, plus the current writer core if there is one (a lg C-bit writer-core ID and one bit indicating whether there is a writer). As shown in Figure 2 (ignore the portion shaded gray), with B-byte cache lines, an AIM entry is \((C + 1 + \lg C) \times B\) bits, e.g., 96 bytes for 8 cores and 64-byte lines.

As a centralized structure, contention by cores accessing the AIM threatens scalability at high core counts. Address-banking the AIM reduces contention, and mitigates the threat to scalability; we assume 8 AIM banks. An ideal AIM with one entry per LLC line is a perfect cache of the LLC’s access information. However, an ideal AIM is impractical.
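The entry-size arithmetic above can be double-checked with a small sketch (the helper name is ours):

```python
import math

def aim_entry_bits(cores: int, line_bytes: int) -> int:
    """Bits per AIM entry: per byte, one read bit per core, one
    writer-present bit, and a lg(C)-bit writer-core ID."""
    return (cores + 1 + int(math.log2(cores))) * line_bytes

# 8 cores, 64-byte lines: (8 + 1 + 3) * 64 = 768 bits = 96 bytes,
# matching the example in the text.
assert aim_entry_bits(8, 64) == 768
assert aim_entry_bits(8, 64) // 8 == 96
```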
With 8 cores, 64-byte lines, and a 16 MB, 16-way LLC, the AIM would be around 24 MB: a large (59.2 mm²), slow (9 ns access time), and power-hungry (394 mW leakage per bank) on-chip structure in 32-nm technology (data from CACTI 7 [24]). Instead, CE+ uses a realistic AIM design that, for 8 cores, has 32K entries and 4-way associativity. This 3 MB AIM has implementable area (9.2 mm²), latency (3.3 ns access time), and leakage (52.7 mW per bank). In a 6-core Intel Core i7-3970X at 3.5 GHz, this AIM would add 0.39% overhead to the 2,362 mm² package area, 12-cycle access latency, and negligible leakage of 0.3% of the thermal design power (TDP).

The AIM’s hardware design scales well across a moderate range (up to 32) of CMP core counts. At 16 cores, a 32K-entry AIM is 5.3 MB, has 4.6 ns access time, 13.5 mm² area, and 89 mW leakage per bank. At 32 cores, a 64K-entry AIM is 19 MB, with 7.8 ns access time, 53.8 mm² area, and 310 mW leakage per bank. We choose these AIM sizes to balance AIM misses and hardware cost.

Since AIM size, latency, and leakage scale with core count, the AIM is unlikely to scale to large (>32) core counts. At 64 cores, a 128K-entry AIM would be 71 MB with 13.8 ns access time, 162.8 mm² area, and 1108 mW leakage per bank. In the Intel Core i7-3970X, such an AIM would incur around 6% area overhead, 49-cycle access latency, and substantial leakage of 6.0% of the TDP; such a structure is too costly to implement. Using fewer (e.g., 64K) entries decreases area and power costs, but increases the AIM’s miss rate, increasing a computation’s latency and total energy consumption.

IV. Design Overview of ARC

The second contribution of this work is to present a new architecture called ARC (Architecture for Region Consistency) that provides the SFRSx memory model. Unlike CE and CE+, which rely on M(O)ESI coherence, ARC exploits synergy between (1) conflict detection and (2) coherence based on release consistency and self-invalidation.
In release consistency, a core’s private cache waits to propagate writes until a synchronization release operation [19], [27]. In self-invalidation, a core invalidates privately cached lines that may be out of date at synchronization acquire operations [11], [26]. In contrast with M(O)ESI coherence, release consistency and self-invalidation do not eagerly invalidate lines when they are written by another core. Such “lazy” invalidation allows an ARC core to execute regions mostly in isolation, performing coherence and conflict detection only at region boundaries (synchronization operations) and on evictions to the shared cache. ARC’s novel approach for committing writes and version- and value-validating reads minimizes the communication required for detecting conflicts and avoids most self-invalidation costs.

ARC requires minimal compiler support to identify synchronization operations, which serve as region boundaries. ARC does not restrict compiler optimizations within regions. The rest of this section overviews ARC’s design. Section V describes an architecture design that implements ARC.

State: ARC supports byte-granular tracking to provide the precise conflict detection required by SFRSx. Cores’ private caches and the LLC track access information for each byte in a cache line, representing whether the byte has been read and/or written by a core’s ongoing SFR. The LLC needs to maintain access information for lines evicted from a core’s private cache to the LLC. To help validate reads and limit self-invalidation, each LLC line maintains a version: a monotonically increasing integer that represents the latest write-back to the line in the LLC, incremented each time the line is written back to the LLC.

Actions at reads and writes: When a core reads (writes) a byte of memory, it updates the read (write) bit for the accessed byte in its private cache. If the byte was previously written and is being read, the core does not update the read bit.
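As a minimal software model of the per-byte access bits just described (class and method names are ours; real hardware keeps these bits alongside each private cache line):

```python
class PrivateLine:
    """Per-byte read/write access bits for one private cache line."""

    def __init__(self, line_bytes: int = 64):
        self.read_bits = [False] * line_bytes
        self.write_bits = [False] * line_bytes

    def on_read(self, offset: int) -> None:
        # A read of a byte already written in this region is not recorded:
        # the region observes its own write, so the read needs no validation.
        if not self.write_bits[offset]:
            self.read_bits[offset] = True

    def on_write(self, offset: int) -> bool:
        # Returns True for a write-after-read (WAR) upgrade, which ARC
        # reports to the LLC before the byte is overwritten (Section IV-B).
        war = self.read_bits[offset]
        self.write_bits[offset] = True
        return war

line = PrivateLine()
line.on_read(3)
assert line.on_write(3)        # byte 3 was read earlier: WAR upgrade
line.on_write(5)
line.on_read(5)                # read after local write: read bit stays clear
assert not line.read_bits[5]
```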
Aside from fetching a line from the LLC on a private cache miss, a read or write does not trigger any communication with the LLC or other cores.

A. Actions at Region Boundaries

When a core’s region ends, it provides both coherence and SFR serializability using a region commit protocol. Unlike other mechanisms (e.g., TCC [20]; Section VI-C), the region commit protocol for a core can proceed in parallel with other cores performing the protocol or executing regions, because the core and the LLC do not communicate with other cores’ caches during the protocol. The protocol ensures atomicity by setting a core’s write access bits in the LLC for the duration of the protocol. The protocol consists of the following three operations in order:

(1) Pre-commit: For each dirty line, the core sends its privately cached write bits and version to the LLC. The LLC checks for any conflicting access bit for the same byte in the LLC, which indicates a conflict. If the version matches the LLC line’s version, then the core’s cached line is up to date and is not invalidated during post-commit (described below). Otherwise, the LLC sends a “must invalidate line” message to the core.

(2) Read validation: The core must validate that the values it read are consistent with the LLC’s current values. Instead of sending each line’s data, which would be expensive, ARC sends the line’s version to the LLC. The LLC compares the line’s version with its version of the line. A successful version validation of the line implies the core read valid values during the region. On a version mismatch, which indicates a potential conflict, the LLC responds with its version and data values for the line. The core value-validates the line precisely by comparing the LLC’s values with its cached values (looking only at bytes with read bits set) and generates a consistency exception if they do not match; otherwise the core updates its version and values.

2 http://ark.intel.com/products/70845
Even on a version match, if any write bit is set in the LLC line for a remote core, the LLC responds with its line’s write bits. The core ensures the absence of a write–read conflict by checking that no locally read byte has its write bit set in the LLC.

To ensure validation against a consistent LLC snapshot, read validation must repeat until it validates every line without any version mismatches. Starvation is possible if a core repeatedly retries read validation, but ARC is livelock and deadlock free: a version mismatch implies that some other core made progress by writing to the LLC. A misbehaving thread of a process $P$ can starve other threads of process $P$ only. $P$’s misbehaving thread cannot starve another process, $Q$ (which would be a denial-of-service attack), because $P$ and $Q$ access distinct lines (assuming no interprocess sharing). A long-running region should occasionally validate its reads, to detect a data race that causes a region to get stuck in an infinite loop that is infeasible in any SFR-serializable execution (a so-called “zombie” region [21]).

Read validation ensures SFR serializability, even considering the ABA problem, because it validates values against a consistent LLC snapshot: it ensures that a core’s read values match values in the LLC (and no conflicting write bits are set) at the point in time when read validation (re)started. If ABA happens, read validation will detect a version mismatch but not a value mismatch, update the private line version, and retry.

A core $c$ can skip validating any line that was not updated in the LLC by any other core during $c$’s region. ARC maintains a per-core write signature at the LLC that encodes which lines have been updated in the LLC during the core’s current region by any other core. The core receives the write signature from the LLC at the start of read validation.
It re-fetches the write signature from the LLC at the end of read validation to ensure that it has not changed; if it has, read validation restarts.

(3) Post-commit: The core writes back dirty bytes to the LLC (these write-backs can be deferred; Section V-C1) and clears its private cache’s access information. The LLC clears all of its access information for the core. By keeping write bits set from pre- to post-commit, ARC ensures that commit and validation appear to happen together atomically.

In a naïve design, the core must then invalidate all lines in its private cache. However, a core can avoid invalidating most lines by leveraging ARC’s existing mechanisms. A core can avoid invalidating most lines accessed by the ending region because read validation has already ensured that read-only lines are up to date with the LLC, and pre-commit has ensured that written-to lines are up to date with the LLC, except for lines for which the LLC sends a “must invalidate line” message to the core. A core can avoid invalidating a line not accessed by the ending region if ARC can ensure that other cores have not written to the line in the LLC during the region’s execution, identified using the same per-core write signature that read validation uses.

B. Evictions and WAR Upgrades

If a core evicts a line that has access information, the core’s private cache writes back the access information to the LLC, along with the line data if the line is dirty. The LLC uses the access information to detect conflicts with other cores that have already evicted the same line, or that later validate reads, commit writes, or evict lines to the LLC. Note that when a core evicts a private line, the core and LLC do not communicate with other cores.

When a core writes a byte that it read earlier in its ongoing region, called a write-after-read (WAR) upgrade, the private cache cannot simply overwrite the byte because that would make it impossible to value-validate the prior read.
ARC instead immediately sends a WAR-upgraded line’s read bits and version to the LLC. The LLC read-validates the line and detects future read–write conflicts for the line, similar to how private cache line evictions are handled. As the next section explains, the ARC architecture avoids communicating with the LLC for WAR-upgraded L1 lines by preserving an unmodified copy of the line in the L2.

V. Architecture Design of ARC

The ARC architecture is a collection of modifications to a baseline multi-core processor. The cores share the last-level cache (LLC), and each core has a cache hierarchy with private, write-back L1 and L2 caches. Unlike CE and CE+, ARC need not assume that the LLC is inclusive of a core’s private L1 and L2 caches. ARC’s baseline processor has no support for cache coherence; it has no directory, and each cache line has only a valid bit and a dirty bit. Figure 3 shows the components (shaded blocks) that ARC adds to the baseline processor: (1) access information storage and management and (2) distributed per-core consistency controllers (CCs), each discussed next.

A. Private Access Information Management

Every L1 and L2 cache line maintains access information and a 32-bit version (as shown in Figure 4) that ARC uses to detect conflicts. ARC associates a read bit and a write bit per byte with each line in the core’s L1 and L2 caches.

An epoch is a number that identifies a core’s ongoing SFR, and the AIM stores each core’s epoch in a dedicated current epoch register. The AIM increments a core’s epoch when the core finishes an SFR. On a fill, the AIM compares the entry’s epochs to each core’s epoch. A differing epoch indicates access information from a completed SFR, allowing the AIM to lazily clear information for that line for that core. The epoch list avoids the need to explicitly track which lines in memory have access information.
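The version-then-value read validation from Section IV-A can be sketched for a single line as follows. This is a simplified software model: class and function names are ours, and write-bit conflict checks and the retry loop across lines are omitted.

```python
class LLCLine:
    """An LLC line: data plus a version incremented on every write-back."""
    def __init__(self, data: bytes, version: int = 0):
        self.data = bytearray(data)
        self.version = version

class ConsistencyException(Exception):
    pass

def validate_read_line(priv_data, priv_version, read_bits, llc):
    """Validate one privately cached line against the LLC.

    A version match means the values read are current. On a mismatch,
    value-validate only the bytes whose read bits are set; if they match
    anyway (e.g., an ABA case), adopt the LLC's version and data so the
    caller can retry validation; otherwise raise a consistency exception."""
    if priv_version == llc.version:
        return priv_data, priv_version
    for i, was_read in enumerate(read_bits):
        if was_read and priv_data[i] != llc.data[i]:
            raise ConsistencyException("region read inconsistent data")
    return bytearray(llc.data), llc.version
```

In the full protocol, any version mismatch forces the core to revalidate all read lines against a consistent LLC snapshot before the region can commit.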
With $C$ cores, $B$-byte cache lines, 4-byte versions, and $E$-bit epochs, each line evicted from the AIM occupies $4 + \left\lceil \frac{(C + 1 + \lg C) \times B + E \times C}{8} \right\rceil$ bytes of memory. ARC uses page overlays [45] to consume memory only for lines that have AIM-evicted access information. Page overlays extend the TLB and page tables to provide efficient, compact access to a page’s per-line metadata.

C. Consistency Controllers (CCs)

Section IV described the steps of ARC’s region commit protocol. Here we focus on the implementation of the consistency controllers (CCs), which contain buffering and control logic for exchanging access bits, versions, and values. Each core has a core-side CC and an AIM-side CC. The CCs themselves are unlikely to limit scalability because different cores’ CCs share no state or control logic at the core or AIM side. Contention at the AIM by different cores’ AIM-side CCs is unlikely to limit scalability because the AIM is banked and accesses to it are infrequent relative to region execution.

1) Region Commit Protocol: Figure 5 shows the high-level states and transitions of a core’s core-side and AIM-side CCs. During region execution, the core-side and AIM-side CCs exchange access information to handle WAR upgrades and evictions. The core-side and AIM-side CCs also coordinate during the other protocol phases. The figure omits transitions for consistency exceptions to avoid clutter; ARC may deliver consistency exceptions for a conflict detected during execution, pre-commit, or read validation.

A core’s core-side CC initiates the commit protocol. A core’s protocol phases can overlap with other cores performing the protocol or executing regions; the protocol ensures atomicity by setting a core’s write bits in the AIM during pre-commit and not clearing them until post-commit.
Different cores’ CCs never communicate directly; instead, a core’s CC checks consistency using only data in the LLC and metadata in the core’s cache and in the AIM.

Pre-commit: The core sequentially sends write bits from its dirty cached lines to its AIM-side CC. The core’s AIM-side CC compares the core’s write bits to all other cores’ access bits from the AIM to check for a conflict. Upon a conflict, the core’s CC delivers an exception to the core. If there is no conflict, the AIM-side CC updates the AIM entry’s access bits to match the buffered ones received from the core-side CC.

Read validation: The core-side CC performs read validation, sending a sequence of validation request messages to its AIM-side CC, one for each line the core read during the ending region (lines 1–7 in Figure 6). Each message contains the line address and version from the core’s private cache. For each message it receives, the AIM-side CC compares with the version from the AIM. If all versions match and no write bits are set for a remote core for any offset in the shared line, read validation completes successfully.

If a read line’s versions match, but a write bit was set by a remote core, the core’s AIM-side CC responds to the core-side CC with write bits so the core-side CC can check for write–read conflicts. In case of a conflict, the core raises a consistency exception.

If a line’s version mismatches, another core wrote to the line and there may be a conflict. On a version mismatch, the core’s AIM-side CC sends the core-side CC the line’s updated version. Lines 8–15 of Figure 6 show how the core-side CC handles responses from the AIM-side CC. The core re-fetches that line from the LLC into a dedicated line comparison buffer in the core. The core-side CC compares the (read-only) line in the private cache to the line in the comparison buffer.
If the lines differ for a byte that has its read bit set, then the validating core read inconsistent data and raises a consistency exception. If they match, then the core may have seen consistent data in its region; the core sets its revalidate bit because the core must revalidate prior lines. After a core finishes validating all remaining lines, it revalidates by starting again from the beginning, streaming versions for comparison with the AIM by the AIM-side CC. After the core completes validation (or revalidation) without version mismatches, it unsets the revalidate bit and continues.

Post-commit: The core-side CC clears L1 and L2 lines’ access bits, and the AIM-side CC clears the core’s access bits in the AIM and finally increments the core’s epoch. Instead of writing back dirty L1 and L2 bytes to the LLC, ARC optimizes post-commit by deferring write-back of each dirty line until another core needs it. Deferring write-backs adds $1 + \lg C$ bits (for a system with $C$ cores) per LLC line to identify whether a line’s state is deferred and which last-writer core has the deferred data. Writing back the write bits is still required, to allow the CC to detect conflicts and commit writes atomically. If another core requests a deferred line from the LLC, the LLC first fetches the latest values from the last-writer core before responding to the request.

The core-side CC invalidates L1 and L2 lines that cannot be kept valid based on pre-commit, read validation, or the per-core write signature. As an optimization, private cache lines have a special cond-invalid (CINV) state in addition to valid (V) and invalid (I) states; CINV indicates that the line’s data is valid only if the LLC’s version is unchanged. During post-commit, a core optimistically changes each untouched line’s state to CINV, instead of I.
When a core accesses a line in the CINV state for the first time in a subsequent region, the core’s CC sends its copy of the line’s version to the AIM-side CC, which compares the version with the AIM’s version and replies to the core indicating whether the versions match. If the versions match, the core-side CC upgrades the line in the L2 and L1 caches to V. Otherwise, the access is handled as a miss.

2) Other CC Responsibilities: Besides performing the region commit protocol, the core- and AIM-side CCs have the following responsibilities.

Per-core write signatures: The AIM-side CCs encode the write signature for each core as a Bloom filter. Whenever a core writes back to the LLC, its AIM-side CC updates every other core’s write signature to include the updated line. When a core starts read validation, the core’s AIM-side CC sends the core-side CC its write signature and clears the AIM-side CC’s copy of the core’s signature. The core-side CC uses its received copy of the write signature during read validation to identify lines that do not need to be validated, and during post-commit to identify lines that do not need to be invalidated. The AIM-side CC uses a 112-bit Bloom filter for each core, which along with control data fits into one 16-byte network flit (Section VI-A).

Handling evictions and WAR upgrades: When an L2 evicts a line with access information, the core’s AIM-side CC performs pre-commit and read validation on the line. Likewise, if an L2 sends access information for a WAR-upgraded L2 line (Section V-A), the AIM-side CC performs read validation on the line. The AIM-side CC checks for conflicts using the access information in the AIM, and checks that the L2 line’s contents match the version or, if the versions do not match, the values in the LLC. Finally, the AIM-side CC updates the line’s access information in the AIM.
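The per-core write signature can be sketched as a small Bloom filter. The 112-bit width matches the text; the hash scheme and two-probe design below are our assumptions:

```python
import hashlib

class WriteSignature:
    """Per-core write signature: a 112-bit Bloom filter over line addresses."""
    BITS = 112

    def __init__(self):
        self.bits = 0

    def _positions(self, line_addr: int):
        digest = hashlib.blake2b(line_addr.to_bytes(8, "little"),
                                 digest_size=8).digest()
        return (int.from_bytes(digest[:4], "little") % self.BITS,
                int.from_bytes(digest[4:], "little") % self.BITS)

    def add(self, line_addr: int) -> None:
        for p in self._positions(line_addr):
            self.bits |= 1 << p

    def may_contain(self, line_addr: int) -> bool:
        # False means the line was definitely not updated in the LLC during
        # the region, so the core can skip validating or invalidating it;
        # True may be a false positive, which only costs extra validation.
        return all((self.bits >> p) & 1 for p in self._positions(line_addr))

sig = WriteSignature()
sig.add(0x1000)
assert sig.may_contain(0x1000)            # Bloom filters have no false negatives
assert not WriteSignature().may_contain(0x2000)
```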
When a core’s L2 fetches an LLC line with access bits in the AIM for that core, the LLC sends the core the line’s data values and the access bits for the core, which the core uses to populate its L1 and L2 access information. Delivering consistency exceptions: When a core’s CC detects a conflict, it generates a consistency exception, by raising a non-maskable interrupt signal for the core that detected the conflict. The core receives the interrupt and then runs operating system code to terminate the execution. D. Other Issues Implementing synchronization: CE and CE+ support lock-based synchronization using M(O)ESI. By forgoing M(O)ESI coherence, ARC needs a special mechanism to implement lock acquire and release. ARC uses a mechanism similar to distributed queue locks used by DeNovoND [52]; alternatively, callbacks could efficiently implement locks with self-invalidation [39]. We assume compiler support to identify synchronization as region boundaries (e.g., endR instruction in CE [31]). ARC can handle legacy code by intercepting pthread calls to identify them as synchronization, but a library approach alone does not support other synchronization strategies (e.g., atomics and assembly). Handling context switches and translation shootdowns: To avoid false conflicts, thread migration in ARC is allowed only at synchronization points. Furthermore, the region commit protocol needs to complete before a thread is migrated, to avoid missing conflicts. A core can context-switch from one process’s thread to another process’s thread at any time (assuming no interprocess memory sharing), which preserves the operating system’s process scheduling behavior. A core can only switch from one thread to another thread from the same process at a synchronization point to avoid missing conflicts between the threads. If a swapped-in thread evicts a privately cached line accessed by a swapped-out thread, the eviction may lead to a consistency exception. 
The operating system can use the page table to identify the process that originally set the metadata bits, and deliver the exception to that process. Page re-mapping (e.g., changing page permissions) can flush access bits with the TLB shootdown to avoid future false conflicts on the re-mapped page, or re-mapping could end the current region. VI. Evaluation This section evaluates the run-time performance and energy usage of CE+ and ARC, compared primarily with CE [31]. We also compare ARC with TCC's mechanisms [20] and with a contemporary shared-memory system that provides weak execution guarantees in the presence of data races. <table> <thead> <tr> <th>Processor</th> <th>4-, 8-, 16-, or 32-core chip at 1.6 GHz. Each non-memory-access instruction takes 1 cycle.</th> </tr> </thead> <tbody> <tr> <td>L1 cache</td> <td>8-way 32 KB per-core private cache, 64 B line size, 1-cycle hit latency</td> </tr> <tr> <td>L2 cache</td> <td>8-way 256 KB per-core private cache, 64 B line size, 10-cycle hit latency</td> </tr> <tr> <td>Remote core cache access</td> <td>15-cycle one-way cost (for CE and CE+)</td> </tr> <tr> <td>LLC</td> <td>16-way shared cache, 2 MB per core (64 MB at 32 cores), 64 B line size; inclusive for CE and CE+, non-inclusive for ARC</td> </tr> <tr> <td>AIM (CE+ and ARC)</td> <td>4-way metadata cache with 8 banks; 64K entries by default</td> </tr> <tr> <td>Memory</td> <td>120-cycle latency</td> </tr> <tr> <td>Bandwidth</td> <td>NoC: 100 GB/s, 16-byte flits; Memory: 48 GB/s</td> </tr> </tbody> </table> TABLE I ARCHITECTURAL PARAMETERS USED FOR SIMULATION. A. Simulation Methodology We implemented CE, CE+, and ARC in simulation. The simulators are Java applications that implement each architecture; they use Pin [32] to generate a serialized event trace that the simulators consume. For each program execution and core count, all simulators run the same serialized event trace from Pin to eliminate differences due to run-to-run nondeterminism.
We send simulation output data to McPAT [28] to compute energy usage. The CE and CE+ simulators extend the directory-based MESI cache coherence protocol implemented in the RADISH simulator, provided by its authors [14]. We have made our Pin frontend and CE, CE+, and ARC simulator backends publicly available.⁴ Table I shows simulation parameters for 4–32 cores. CE and CE+ use an LLC that is inclusive, to support MESI with the directory embedded in the LLC (see Figure 8.6 in [49]). ARC's LLC is not inclusive (Section V). The simulators treat pthread calls as lock operations. ARC treats atomic accesses (i.e., those with the x86 LOCK prefix) as special, handling them like locks (Section V-D) that do not delineate regions. **Modeling execution costs:** We use an idealized core model with an IPC of one for non-memory instructions. Table I shows instruction cycle costs. Our simulators report the maximum cycles for any core; as in prior work [5], [14], the simulators do not model synchronization wait time. We model wait-free, write-back caches with idealized write buffers. Our simulation ignores the effects of context switching and page remapping. We compute energy usage using the McPAT modeling tool [28]. McPAT takes as input architectural specifications and dynamic statistics corresponding to an execution (e.g., cache misses, coherence events, and simulated execution cycles), and reports static and dynamic power usage of specified architectures. Since our simulators do not collect core-level statistics such as ALU and branch instructions, our methodology uses McPAT to compute power consumption for the cache and memory subsystem only, including the on-chip interconnect and LLC-to-memory communication, and computes corresponding energy usage.
⁴ https://github.com/PlaSSicity/ce-arc-simulator-ipdps19
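The trace-driven methodology can be sketched as follows; the event format and the backend below are illustrative stand-ins, not the simulators' actual Java interfaces.

```python
# Sketch of the trace-driven methodology: every simulator backend consumes
# the same serialized event stream, so per-backend differences cannot stem
# from run-to-run nondeterminism. The event format and the backend below
# are illustrative stand-ins, not the simulators' actual Java interfaces.

class ToyBackend:
    """Charges one cycle per event to the issuing core (illustrative)."""
    def __init__(self, name, n_cores):
        self.name = name
        self.cycles = [0] * n_cores

    def consume(self, event):
        core, _op, _addr = event
        self.cycles[core] += 1

    def max_core_cycles(self):
        # As in the simulators, report the maximum cycles over all cores.
        return max(self.cycles)

def replay(trace, backends):
    """Feed one serialized trace of (core, op, addr) events to every
    backend and collect each backend's reported execution cycles."""
    for event in trace:
        for backend in backends:
            backend.consume(event)
    return {b.name: b.max_core_cycles() for b in backends}
```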
To model the costs of ARC’s operations at region boundaries, when cores send messages without synchronous responses during pre-commit and read validation, we compute the cycle cost of messages based on the total message size and bandwidth between a core and the LLC. The ARC simulator models the full cost of version mismatches during read validation, including repeated validation attempts. However, since the simulators process a serialized event trace, the number of read validation attempts for a region cannot exceed two in our evaluation. We simulate an interconnect with 16-byte flits and with bandwidth characteristics as shown in Table I. A control message is 8 bytes (tag plus type); a MESI data message in the CE simulators is 64 bytes (i.e., a cache line). For ARC write-backs, we model idealized write-buffer coalescing that sends only dirty bytes. When sending versions to the LLC during read validation, each flit holds four lines of data; we assume that the core’s AIM-side CC and the LLC are ported to handle a message’s four validation requests. Our simulators compute the bandwidth required by the different techniques by tracking the amount of data that is transmitted on the on-chip interconnect and the off-chip memory network during application execution. To keep the complexity of the simulators manageable, the simulators do not model queuing in the on-chip and off-chip networks. We approximate the effects of queuing by scaling execution cycles by the proportion with which the assumed on-chip and off-chip bandwidths (Table I) are exceeded. This simple methodology does not model network stalls due to bursts of traffic saturating network-internal buffers. For example, ARC may suffer periodic bursts in bandwidth consumption and network stalls while executing the region commit protocol, as can CE and CE+ when broadcasting the endR message at region boundaries. 
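The cycle-scaling approximation can be made concrete. Assuming the on-chip and off-chip overshoot factors compose multiplicatively (our reading: for canneal's CE+ run at 32 cores, 153/100 times 180/48 is about 5.7, close to the 5.8X slowdown reported in Section VI-B), a minimal sketch:

```python
# Sketch of the queuing approximation: scale execution cycles by the
# proportion with which each assumed bandwidth limit (Table I) is
# exceeded. Composing the two factors multiplicatively is our assumption.

NOC_GBPS = 100.0   # assumed on-chip interconnect bandwidth (Table I)
MEM_GBPS = 48.0    # assumed off-chip memory bandwidth (Table I)

def scaled_cycles(cycles, noc_demand_gbps, mem_demand_gbps):
    """Scale simulated cycles when average bandwidth demand exceeds the
    assumed limits; demand within the limits leaves cycles unchanged."""
    noc_factor = max(1.0, noc_demand_gbps / NOC_GBPS)
    mem_factor = max(1.0, mem_demand_gbps / MEM_GBPS)
    return cycles * noc_factor * mem_factor
```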
**Benchmarks:** Our experiments execute the PARSEC benchmarks [5], version 3.0-beta-20150206, with simmedium inputs. We omit freqmine since it uses OpenMP as the parallelization model, and facesim since our Pintool fails to finish executing it. We measure execution cost for the parallel “region of interest” (ROI) only; vips lacks an ROI annotation so we use its entire execution as the ROI. Table II shows how many threads each benchmark spawns, parameterized by \( n \), which is PARSEC’s minimum threads parameter. The simulators set \( n \) equal to the number of cores (4, 8, 16, or 32) in the simulated architecture. The last three columns show the average number of memory accesses performed per SFR. The simulators map threads to cores using modulo arithmetic. **Consistency exceptions:** When the CE, CE+, and ARC simulators detect conditions for a consistency exception, they log the exception and continue execution. In our experiments, CE/CE+ and ARC detect conflicts in canneal and streamcluster, and CE/CE+ also detects conflicts in vips. These differences arise because CE/CE+ detects all conflicts eagerly, while ARC detects some conflicts lazily. The simulators can report both locations involved in a conflict, by maintaining the last-access source location corresponding to each read and write bit of access information. Using Google’s ThreadSanitizer [44] and by implementing “collision analysis” [18], we have confirmed that each detected conflict corresponds to a true data race. **B. Performance and Energy Comparison** Figure 7 shows our main results. The configurations (CE-4, CE+, ARC-4, etc.) show the run-time performance and energy usage for CE, CE+, and ARC on 4, 8, 16, and 32 cores, respectively, normalized to CE-4. Figure 7(a) shows executed cycles as reported by the simulators, broken down into different components. CE and CE+ are divided into cycles attributed to MESI coherence and other execution. 
Coherence cycles are those spent when the directory forwards requests to remote cores and for core-to-core communication. For ARC, cycles are divided into cycles for pre-commit, read validation, and post-commit, and cycles for region execution. Figure 7(b) compares the energy usage of CE, CE+, and ARC. Each bar shows the breakdown of total energy consumed into energy due to static and dynamic power dissipation, as computed by McPAT. The static and dynamic energy components for CE+ and ARC include contributions from using the AIM cache, and the dynamic component for ARC includes the contribution from Bloom filter accesses. Figures 7(a) and 7(b) show that CE+ improves run-time performance and energy usage over CE across core counts for the majority of programs. For 4, 8, and 16 cores, CE+ improves execution cycles and energy usage over CE by 8.8% and 7.9%, 8.9% and 7.9%, and 7.1% and 6.2%, respectively. CE backs up access bits in an in-memory table when a line that was accessed in an ongoing region is evicted from a private cache. CE accesses memory even on an LLC hit, for a line that was previously evicted from a private cache or the LLC during the ongoing region. The on-chip AIM in CE+ helps avoid some of CE's expensive memory accesses. Overall, these results show promise in using a metadata cache to reduce off-chip memory traffic, thus improving performance and saving energy. On 32 cores, CE+ fares the same as or better than CE for all programs except canneal. canneal has large regions and working sets, which leads CE to saturate off-chip memory bandwidth (CE demands 120 GB/s, compared to the 48 GB/s assumed in our simulation; Table I) moving evicted access metadata to and from memory [31]. Private cache line recalls during LLC evictions in MESI-based CE+ cause CE+ to incur many AIM evictions, and AIM lines are large, especially at 32 cores.
Many expensive AIM evictions for canneal lead CE+ to saturate both the on-chip interconnect (153 GB/s, greater than the assumed on-chip network bandwidth of 100 GB/s) and the off-chip memory network (180 GB/s). As discussed in Section VI-A, we model network saturation and queuing by scaling execution cycles. Limited bandwidth increases CE+'s execution cycles by 5.8X, and hence CE+ performs poorly compared with CE. As a result, on 32 cores, CE+ is 0.3% slower and uses 0.8% more energy on average than CE. ARC outperforms CE for several programs and performs similarly for the others. ARC's performance benefit over CE arises from performing fewer memory accesses for metadata. The results also show that AIM and Bloom filter dynamic energy costs are insignificant compared to energy costs of the cache and memory subsystem, justifying their inclusion. ARC performs nearly identically to CE+ for 4–32 cores, and especially outperforms both CE and CE+ for canneal at 8, 16, and 32 cores. ARC's approach to coherence and its use of a non-inclusive LLC stress the network much less for canneal (90 GB/s for on-chip interconnect and 75 GB/s for off-chip memory bandwidth at 32 cores) in moving around metadata compared to CE and CE+, which build on MESI's eager invalidation-based protocol. In general, ARC uses several times less network bandwidth than CE and CE+ for several programs. ARC uses less energy than CE and CE+ for canneal because ARC runs the application faster (i.e., fewer cycles). For fluidanimate, ARC's execution time is slightly higher than CE+'s (and sometimes CE's, depending on the core count). fluidanimate performs more synchronization operations with increasing numbers of threads, and has progressively smaller regions with more threads (Table II).
For ARC, more frequent region boundaries 1) cause more frequent invocations of pre- and post-commit, read validation, and self-invalidation operations, which add execution cycles, and 2) incur latency from cache misses due to frequent self-invalidation. **Sensitivity to AIM size:** We evaluated the impact of the AIM size with an idealized AIM that has one entry for each LLC line and a smaller-sized AIM (detailed results omitted for space). For simplicity, we evaluated AIM size sensitivity only for ARC. On 32 cores, the idealized AIM improves execution time and energy usage by ~10% compared to the default 64K-entry AIM (Table I). At a lower hardware cost, a 32K-entry AIM increases execution cycles and energy usage by <10% on average, compared to the 64K-entry AIM. These results show that the AIM remains effective at reasonable sizes. **Hardware costs:** CE and CE+ differ primarily in the use of an AIM. We estimate the opportunity cost of the AIM in CE+ at 32 cores by translating the space overhead of the AIM into additional LLC size in CE. This configuration of CE, CE-Ext, has a larger LLC (84 MB) than the default at 32 cores (64 MB, see Table I) to account for the AIM overhead. The evaluation methodology is the same as for Figure 7. In our experiments, the improvement in hit rates due to a larger LLC in CE-Ext is negligible compared to the overall execution, and hence the larger LLC in CE-Ext has negligible impact (<1% on average) on the overall performance and energy consumption (results omitted for space). While CE and CE+ build on MESI, ARC avoids a cache coherence protocol. Assuming an idealized sparse directory that distributes state by chaining pointers in cores' private lines, in a system with 8 cores and a 16 MB, 16-way LLC with 64-byte lines, a directory would require 1 MB of storage for tags and pointers.
An AIM for the same system requires ~3 MB of storage, adding modest hardware overhead while potentially limiting the need for frequent memory accesses for most applications and different core counts. Furthermore, ARC's use of release consistency and self-invalidation mechanisms provides more design flexibility by not requiring an inclusive LLC and support for core-to-core communication. Other than the space overhead, detailed results from McPAT show that static power dissipation from the AIM contributes to overall power insignificantly. C. Comparison with TCC Transactional Coherence and Consistency (TCC) is a hardware transactional memory (Section VII) design that, like ARC, provides coherence and conflict detection at region boundaries without a MOESI-like protocol [20]. As in CE, CE+, and ARC, all code in TCC executes in regions (i.e., transactions). TCC broadcasts transaction write sets to detect conflicts at region boundaries. Speculation allows TCC to efficiently track accesses for regions of memory larger than a byte (e.g., a cache line), but coarse granularity leads to false conflicts. To compare TCC with ARC empirically, we evaluate a modified version of ARC called ARC-TCC that uses TCC's mechanisms. For ARC-TCC, we compute execution cycles excluding read validation and pre- and post-commit, but including the following: each region broadcasts its write set, and a region that overflows its private caches cannot execute in parallel with other overflowed or committing regions [20]. Table III shows the run-time overhead incurred by ARC-TCC compared with ARC. The amount of serialization incurred by ARC-TCC or TCC during an ongoing transaction depends on how often the per-core write buffer in the TCC architecture overflows. To estimate the impact of the write buffer on ARC-TCC's performance, we evaluated the performance of ARC-TCC for write buffer sizes of 8–64K per core.
For each core count and write buffer size, Table III reports the ratio of ARC-TCC to ARC's execution time. The table shows that TCC's mechanisms continue to incur high run-time overhead even with large per-core write buffers because many regions overflow the private caches, leading to much serialization. This comparison shows that, for the same context (i.e., precise conflict checking of SFRs), ARC's design provides substantial performance benefits over TCC. Follow-up work optimizes TCC using a directory [10] and by parallelizing commits [37]. However, TCC is fundamentally limited by its use of *bounded* write buffers, which overflow, leading to serialized commit. D. Evaluating the Cost of Strong Memory Consistency Modern shared-memory systems provide *undefined semantics* for programs with data races (Section I). *Memory models* for shared-memory programming languages such as C/C++ and Java are mostly unable to provide useful guarantees to executions with data races [1], [7], [33]. Though hardware memory models (e.g., [3], [46], [50]) are generally stronger than language models, they apply only to the compiled code, thereby failing to provide *end-to-end* guarantees with respect to the source program. Here, we estimate the cost of providing SFRSx over such a weak memory model (WMM). Note that WMM is not directly comparable to this paper's approaches, given the different guarantees provided. Our WMM configuration models the same directory-based MESI protocol (Figure 8.6 in [49]) used by CE (Table I). Figure 8 shows how CE, CE+, and ARC compare to WMM for 32 cores, using the same methodology as for Figure 7, normalized to WMM-32. Figure 8(a) shows that on average, CE, CE+, and ARC are slower than WMM by 26.7%, 27.1%, and 12.5%, respectively, for 32 cores. Figures 7(a) and 8(a) show that CE, CE+, and ARC scale well for most programs, except for canneal and fluidanimate.
The energy usage is proportional to the running time of each configuration, and is also influenced by the frequency of accesses to the AIM cache and the Bloom filter structures for relevant configurations other than WMM. Figure 8(b) shows that on average CE, CE+, and ARC use 41.4%, 42.6%, and 27.8% more energy than WMM. CE, CE+, and ARC have particularly high overhead for canneal and fluidanimate. As discussed in Section VI-B, canneal has large regions and working sets, which either saturate the on-chip interconnect and the off-chip network for CE and CE+, or require more memory accesses for ARC to transmit access information compared to WMM, significantly slowing execution and increasing energy usage. fluidanimate has short regions that fail to amortize the cost incurred by the operations at region boundaries for configurations other than WMM. E. Summary CE provides well-defined semantics for all executions, but incurs a substantial cost compared to WMM to maintain precise byte-granular access information and to check for region conflicts. CE+, the first contribution in this work, can potentially improve performance and reduce energy usage for a number of applications across core counts, but with increased complexity. ARC, the second contribution, is a completely different design, which performs well compared with CE and CE+. Our work shows that detecting region conflicts using coherence based on release consistency and self-invalidation can be competitive with techniques that either rely on eager invalidation-based coherence (e.g., CE) or are impeded by fundamental limitations on region size (e.g., TCC). Furthermore, we show that ARC can provide strong consistency guarantees with performance that is competitive with the performance of current shared-memory systems (WMM). VII. Related Work This section compares CE+ and ARC with related work not already covered in Section II. Valor provides SFRSx in software alone but slows executions by almost 2X on average [6]. 
IFRit likewise adds high overhead to detect conflicts between extended SFRs [15]. Ouyang et al. enforce SFR serializability using a speculation-based approach that relies on extra cores to avoid substantial overhead [36]. SOFRITAS enforces software-based conflict serializability through fine-grained two-phase locking [13]. Hardware transactional memory (HTM) also detects region conflicts [21], [23]. However, HTM systems can use imprecise conflict detection by leveraging speculative execution, while SFRSx requires precise conflict detection; and HTM must keep original copies of speculatively written data, in case of misspeculation. Like CE and CE+, most HTMs piggyback conflict detection on the cache coherence protocol [35], [54]. Unbounded HTM designs incur run-time cost and design complexity because data that leave the cache cannot easily be tracked by the coherence protocol (e.g., [4]). BulkSC resembles TCC but broadcasts imprecise write signatures [8], [9]. To ensure progress, BulkSC dynamically subdivides regions, precluding its application to SFRSx’s unbounded regions. Software transactional memory (STM) can handle unbounded regions without hardware modifications, but requires heavyweight instrumentation and synchronization that slows (single-thread) execution by 2X or more [12], [21], [22], [41]. Some STM systems use version or value validation of reads (e.g., [12], [21]). ARC’s adaptation of validation to the hardware cache hierarchy and its combination of version and value validation are both novel. DeNovoSync, SARC, and VIPS use self-invalidation to reduce complexity compared to M(O)ESI [26], [38], [51]. TSO-CC and Racer provide TSO using self-invalidation and without tracking sharers [17], [40]. DeNovo and DeNovoND use self-invalidation for coherence, assuming DRF [11], [52]. Jimborean et al. use compiler analysis that assumes DRF to safely extend SFRs, reducing self-invalidation costs [25]. 
Distributed shared memory systems use release consistency to reduce traffic and latency [27]. Unlike prior work, ARC does not assume DRF. Instead, ARC exploits synergy between mechanisms for coherence and conflict detection, detecting data races that manifest as SFR conflicts to provide SFRSx. Prior work supports memory models based on serializability of bounded regions that are in general shorter than full SFRs [2], [34], [43], [47]. Sequential consistency (SC) is essentially serializability of single-instruction regions [29], [30], [48]. To provide end-to-end guarantees, all of these approaches require corresponding restrictions on compiler optimizations. DRFx detects conflicts among bounded regions by maintaining region buffers and Bloom filter signatures of memory accesses [34], [47]. DRFx broadcasts the Bloom filter signatures and occasionally the region buffers across cores, which is unscalable for large regions (e.g., SFRs) and with increasing core counts. Researchers have introduced custom hardware to accelerate data race detection, extending cache coherence and adding on-chip vector clock metadata [14], [42], [53]. VIII. Conclusion CE+ and ARC are architecture designs that ensure strong, well-defined semantics for all executions, including executions with data races. Compared to the state-of-the-art technique CE [31], we show that an AIM cache in CE+ seems promising to reduce the cost of providing SFRSx. The key to ARC's efficiency is its novel design that builds on and leverages release consistency and self-invalidation mechanisms. ARC outperforms CE and TCC [20], and performs competitively with CE+ in terms of run time and energy usage. These results suggest that CE+ and especially ARC advance the state of the art significantly in parallel architecture support for region conflict exceptions.
Permanent repository link: http://openaccess.city.ac.uk/521/ Rephrasing Rules for Off-The-Shelf SQL Database Servers Ilir Gashi, Peter Popov Centre for Software Reliability, City University, Northampton Square, London, EC1V 0HB I.Gashi@city.ac.uk, ptp@csr.city.ac.uk Abstract We have reported previously [1] results of a study with a sample of bug reports from four off-the-shelf SQL servers. We checked whether these bugs caused failures in more than one server. We found that very few bugs caused failures in two servers and none caused failures in more than two. This would suggest that a fault-tolerant server built with diverse off-the-shelf servers would be a prudent choice for improving failure detection. To study other aspects of fault tolerance, namely failure diagnosis and state recovery, we have studied the "data diversity" mechanism and defined a number of SQL rephrasing rules. These rules transform a client-sent statement into an additional, logically equivalent statement, leading to more results being returned to an adjudicator. These rules therefore help to increase the probability of a correct response being returned to a client and to maintain a correct state in the database. 1. Introduction Fault tolerance is frequently the only viable approach to obtaining the required system dependability from systems built out of "off-the-shelf" (OTS) products [2]. There are various ways in which this fault tolerance can be achieved, ranging from simple error detection and recovery add-ons (e.g.
wrappers [3]) to diverse redundancy replication using diverse versions of the components. These design solutions are well known. Questions remain, however, about the dependability gains and implementation difficulties for a specific system. We have studied some of these issues in SQL database servers, a very complex category of off-the-shelf products. We have previously reported [1] results from a study with a sample of bug reports from four off-the-shelf SQL servers, so as to assess the possible advantages of software fault tolerance - in the form of modular redundancy with diversity - in complex off-the-shelf software. We found that very few bugs cause failures in two servers and none cause failures in more than two, which would indicate that significant dependability improvements can be expected from the deployment of a fault-tolerant server built out of diverse off-the-shelf servers in comparison with individual servers or the non-diverse replicated configurations. Although we found that using multiple diverse SQL servers can dramatically improve error detection rates, it does not make them 100%; e.g., our study [1] found four bugs causing identical non-self-evident failures in two servers. Thus there is room for improving failure detection further. Many of the cases in which a failure was detected did not allow for immediate diagnosis of the failed server. In addition to failure detection, fault tolerance also requires diagnosing the faulty server and maintaining data consistency among the databases. To improve the situation, we studied the mechanism called "data diversity" by Ammann and Knight [4] (who studied it in a different context). The simplest example of the idea in [4] refers to computation of a continuous function of a continuous parameter. The values of the function computed for two close values of the parameter are also close to each other.
Thus, failures in the form of dramatic jumps of the function on close values of the parameter can not only be detected but also corrected by computing a “pseudo-correct” value. This is done by trying slightly different values of the parameter until a value of the function is calculated which is close to the one before the failure. This was found [4] to be an effective way of detecting as well as masking failures, i.e. delivering fault tolerance. Data diversity can thus help with failure detection and state recovery, complementing fault-tolerance solutions that employ diverse modular redundancy, and it can also deliver a certain degree of fault tolerance without diverse modular redundancy. Data diversity is applicable to SQL servers because of the inherent redundancy in the SQL language: statements can be “rephrased” into different, but logically equivalent [sequences of] statements. While working with the bug reports we found examples where a particular statement causes a failure in a server but a rephrased version of the same statement does not. Such statements often appear in bug reports as “workarounds”. In this paper we describe how SQL rephrasing can be employed systematically in a fault-tolerant server and provide examples of useful rephrasing rules. We also report on performance measurements using the TPC-C [5] benchmark client implementation to get some initial estimates of the delays introduced by rephrasing. The paper is structured as follows: in section 2 we give details of the architecture of a fault-tolerant server employing rephrasing. In section 3 we give details of the data diversity study we have conducted for defining SQL rephrasing rules and illustrate how one of these rules has been used as a workaround for two known bugs of two SQL servers. In section 4 we give some empirical results of experiments we have conducted to measure the performance penalty due to rephrasing.
In section 5 we discuss some general implications of our results, and finally in section 6 we present conclusions and possibilities for further work.

### 2. Architecture of a Fault-Tolerant Server

### 2.1 General Scheme

Data replication is a well-understood subject [6], [18], [7]. The main problem replication protocols deal with is guaranteeing consistency between copies of a database without imposing a strict synchronisation regime between them. A study which compared various replication protocols in terms of their performance and the feasibility of their implementation can be found in [8]. Existing protocols implement efficient solutions for this problem, but depend on running copies of the same (non-diverse) server. These schemes would not tolerate non-self-evident failures\(^1\) that cause incorrect writes to the database or that return incorrect results from read statements. For the former, incorrect writes would be propagated to the other replicas; for the latter, incorrect results would be returned to the client.

\(^1\) In [1] we classified the failures according to their detectability by a client of the database servers into: self-evident failures - engine crash failures, cases in which the server signals an internal failure as an exception (error message), and performance failures; and non-self-evident failures - incorrect result failures, without server exceptions, within an accepted time delay.

This deficiency can be overcome by building a fault-tolerant server node (“FT-node”) from two or more diverse SQL servers, wrapped together with a “middleware” layer to appear to each client as a single SQL server. An illustration of this architecture with two diverse off-the-shelf servers (“O-servers”) is shown in Fig. 1. A brief explanation of the figure follows. Several nodes (computers) are depicted, which run client applications (Client node 1, Client node 2 and Client node 3) or server applications (Middleware node, RDBMS 1 node and RDBMS 2 node).
The bottom three nodes together form the FT-server. Components may share a node: e.g. the Replication Middleware and the two SQL connectors for dialects 1 and 2 are deployed on the Middleware node. The SQL connectors additionally contain the SQL rephrasing rules. The diagram assumes that the off-the-shelf servers (O-servers) run on separate nodes, RDBMS 1 node and RDBMS 2 node. The circles represent the interfaces through which the components interact. Each SQL connector implements the SQL Connector API interface used by the Replication Middleware component. This, in turn, implements the Middleware API interface via which the client applications access the FT-server, either directly or via a driver for the FT-server in a specific run-time environment, e.g. a JDBC driver or .NET Provider. A further improvement to this architecture would be to also run diverse replicas of the middleware component. We have described the FT-node architecture in more detail elsewhere [9], [2]. Here we will only elaborate on the parts relevant to the discussion of rephrasing.

### 2.2 SQL Connectors

The O-servers are not fully compatible: they “speak different dialects” of SQL, despite being compliant at various levels with SQL standards. Therefore the FT-server includes a translator between these dialects, defined for a subset of SQL (e.g. “SQL-92 entry level”) plus some more advanced features important for enterprise applications (such as TRIGGERs and STORED PROCEDUREs). The translators are depicted as “SQL Dialect Connectors” in Fig. 1. A similar idea (implemented in [10], [11]) is to redefine the grammar of one database server to make it compatible with that of another while keeping the core database engine unchanged.

### 2.3 Failure Detection, Masking, Recovery

The middleware of the FT-server includes extensive functionality for failure detection, masking and state recovery. Self-evident server failures are detected as in a non-diverse server, via server error messages (i.e.
via the existing error detection mechanisms inside the servers), and time-outs for crash and performance failures. Diversity gives the additional capability of detecting non-self-evident failures by comparing the outputs\(^2\) of the different O-servers. In an FT-node with 3 or more diverse O-servers, majority voting can be used to choose a result and thus mask the failure to the clients, and to identify the failed O-server, which may need a recovery action to correct its state. With a 2-diverse FT-node, if the two O-servers give different results, the middleware cannot decide which O-server is in error. This is where “data diversity” can help, by providing additional results to break the tie (more in the next subsection).

\(^2\) An “output” may be the results from a SELECT statement or the number of rows affected for a write (INSERT, UPDATE and DELETE) statement. For INSERT and UPDATE statements a more refined way would be to read back the affected rows and use those for comparison.

State recovery of the database can be obtained in the following ways:

- via standard backward error recovery, which will be effective if the failures are transient (caused by so-called “Heisenbugs” [12]). To command backward error recovery, the middleware may use the standard database transaction mechanisms: aborting the failed transaction and replaying its statements may produce a correct execution. With “data diversity” a finer granularity of recovery is possible using SAVEPOINTs and ROLLBACKs;
- additionally, diversity offers ways of recovering from non-transient failures (caused by so-called “Bohrbugs” [12]), by essentially copying the database state of a correct server into the failed one (similarly to [13]).
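The finer recovery granularity rests on SAVEPOINT/ROLLBACK within a transaction. A minimal Python/sqlite3 sketch (sqlite stands in for an O-server; the table, savepoint name and values are our illustrative assumptions):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manage transactions manually
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, qty INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (1, 5)")
cur.execute("SAVEPOINT before_update")              # mark a recovery point
cur.execute("UPDATE t SET qty = 999 WHERE id = 1")  # suppose this write is judged faulty
cur.execute("ROLLBACK TO before_update")            # undo only the suspect statement...
cur.execute("RELEASE before_update")
cur.execute("UPDATE t SET qty = 10 WHERE id = 1")   # ...and retry (e.g. a rephrased variant)
cur.execute("COMMIT")                               # earlier work in the transaction survives

print(cur.execute("SELECT qty FROM t WHERE id = 1").fetchone()[0])  # → 10
```

Only the statement judged faulty is undone; the rest of the transaction is preserved, unlike a full ABORT.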
Since the formats of the database files differ between the servers, the middleware would need to query the correct server[s] for their database contents and command the failed server to write them into the corresponding records in its database, similar to what is proposed in [14]. Such recovery would be expensive, and perhaps would have to be completed off-line, but a designer can use multi-level recovery, in which the first step is to correct only those records that have been found erroneous on read statements.

### 2.4 Data Diversity Extensions

Even with just two diverse O-servers, many of the O-server failures may be masked by using “data diversity” (rephrasing an SQL statement into a different, but semantically equivalent one) to solicit “second opinions” from the O-servers and, if possible, outvote the incorrect response. Data diversity could be implemented via an algorithm in the “Middleware node” that rephrases statements according to predefined rules. We can define these rules for each type of SQL statement defined by the SQL grammar implemented by the server. These rules may therefore form part of the “SQL Dialect Connectors”. Upon receiving a statement from a client application, the middleware can look up a rule from the list of available rules and rephrase the statement. The middleware must allow for new rules to be defined as and when necessary. If the middleware exhausts the list of rules that it can apply to a certain statement but no “correct result”\(^3\) can be established by the chosen adjudication mechanism, then an error message is returned to the client. Data diversity can be used with or without design diversity. Architectural schemes using data diversity are similar to those using design diversity. For instance, Ammann and Knight in [4] describe two schemes, which they call “retry block” and “n-copy programming”, which can also be used for SQL servers. The “retry block” is based on backward recovery.
A statement is only rephrased if either the server “fail-stops” or its output fails an acceptance test. In “n-copy programming”, a copy of the statement as issued by the client is sent to one of the O-servers and rephrased variant(s) are sent to the others; their results are voted on to mask failures. Data diversity allows for a finer granularity of state recovery, which is facilitated by the implementation of “SAVEPOINT” and “ROLLBACK” within transactions. The procedure (written in pseudocode) for a statement within a transaction is given at the end of this subsection. A performance optimization could be to perform adjudication at an intermediate step of the WHILE loop execution rather than at the end (e.g. with a “majority voting” adjudication, if there are five rules for a particular statement, one could check after executing the first three rephrased versions of the statement whether the results returned by each of them are identical; if so, a majority result is already obtained and the last two rephrased versions of the statement need not be executed). The SAVEPOINT and ROLLBACK approach is the correct way of ensuring the “isolation” property of an ACID transaction\(^4\). Otherwise, if we ABORTed the transaction and started a new one to perform the rephrased version of the statement, a concurrent transaction may have updated rows in the target table. This would lead to different results being returned by the O-server for the rephrased statement even though the behavior is not faulty.

### 3. SQL Rephrasing Rules

As explained in section 2, the support for data diversity can be implemented in the middleware in the form of rephrasing rules. The initial step is defining the rules that are to be implemented. The rules can be defined by studying in depth the SQL language itself, to identify the parts of the language which are synonymous and which therefore enable the definition of logically equivalent rephrasing rules.
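Once defined, such rules slot into the retry loop described in section 2.4: look up the variants for a statement, execute one, roll back to a savepoint between attempts, and stop once the adjudicator accepts a result. A minimal Python/sqlite3 sketch of that loop (the rule table, the `adjudicate` callback and the sqlite back end are our illustrative assumptions, not the paper's pseudocode):

```python
import sqlite3

# Hypothetical rule table: maps a statement to logically equivalent variants.
REPHRASINGS = {
    "SELECT COUNT(*) FROM t": ["SELECT SUM(1) FROM t", "SELECT COUNT(id) FROM t"],
}

def execute_with_rephrasing(cur, stmt, adjudicate):
    """Try the original statement, then each rephrased variant, rolling back
    to a savepoint between attempts; return the first adjudicated result."""
    cur.execute("SAVEPOINT attempt")
    for candidate in [stmt] + REPHRASINGS.get(stmt, []):
        try:
            rows = cur.execute(candidate).fetchall()
            if adjudicate(rows):            # e.g. acceptance test or vote
                cur.execute("RELEASE attempt")
                return rows
        except sqlite3.Error:
            pass                            # self-evident failure: try next variant
        cur.execute("ROLLBACK TO attempt")  # keep the database state unchanged
    cur.execute("RELEASE attempt")
    raise RuntimeError("no variant produced an acceptable result")

con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
rows = execute_with_rephrasing(cur, "SELECT COUNT(*) FROM t", lambda r: r[0][0] == 3)
print(rows[0][0])  # → 3
```

The rule table here is a plain dictionary; in the FT-node it would live in the SQL Dialect Connectors and be extensible at run time.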
\(^3\) Depending on the setup used, a correct result could be either the majority result or one that passes an acceptance test.

\(^4\) This is under the assumption that the ACID property of the transaction is failure-free.

We took a different, more direct approach to defining these rules: we studied the known bugs reported for 4 open-source servers, namely Interbase 6.0, Firebird 1.0\(^5\), PostgreSQL 7.0 and PostgreSQL 7.2 (abbreviated IB 6.0, FB 1.0, PG 7.0 and PG 7.2 respectively). However, our intention was not simply to define workaround rules which are highly bug-specific, but instead to define generic rephrasing rules which can be used in a broader setting. As a result we found that some of the generic rules that we defined could be applied to multiple bugs in our study. We provide examples next.

### 3.1 Generic Rules

The “generic rules” are rephrasing rules which can be applied to a range of ‘similar’ statements, be they DML (data manipulation language: SELECT, INSERT, UPDATE and DELETE) or DDL (data definition language, e.g. CREATE TABLE etc.) statements. We have defined a total of 14 generic rephrasing rules. Full details of these rules are in [15]. We will provide details of Rule 8 and how it proved to be a useful workaround for two different bugs reported for two different servers.

**Rule 8:** *An SQL VIEW can be rephrased as an SQL STORED PROCEDURE or SQL TEMPORARY TABLE.*

This rule proved to be a useful workaround for FB 1.0 Bug 488343 [16].
To observe the failure the bug report details the following setup:

```sql
CREATE TABLE CUSTOMERS (ID INT, NAME VARCHAR(10));
CREATE TABLE INVOICES (ID INT, CUST_ID INT, CODE VARCHAR(10), QUANTITY INT);
INSERT INTO CUSTOMERS VALUES (1, 'ME');
INSERT INTO INVOICES VALUES (1, 1, 'INV.1', 5);
INSERT INTO INVOICES VALUES (2, 1, 'INV.2', 10);
INSERT INTO INVOICES VALUES (3, 1, 'INV.3', 15);
INSERT INTO INVOICES VALUES (4, 1, 'INV.4', 20);
CREATE VIEW V_CUSTOMERS AS SELECT DISTINCT ID, NAME FROM CUSTOMERS;
```

The failure can be observed by issuing the following statement:

```sql
SELECT SUM(INV.QUANTITY) FROM INVOICES INV
INNER JOIN V_CUSTOMERS CUST ON INV.CUST_ID = CUST.ID;
```

The expected result is 50, not 20. If we use a **STORED PROCEDURE** instead of the **VIEW** then the correct result is returned\(^6\):

```sql
SET TERM !! ;
CREATE PROCEDURE V_CUSTOMERS RETURNS (ID INT, NAME VARCHAR(10))
AS
BEGIN
  FOR SELECT DISTINCT ID, NAME FROM CUSTOMERS
    INTO :ID, :NAME
  DO SUSPEND;
END!!
```

Issuing the same **SELECT** statement as before, we obtain the expected result (50):

```sql
SELECT SUM(INV.QUANTITY) FROM INVOICES INV
INNER JOIN V_CUSTOMERS CUST ON INV.CUST_ID = CUST.ID;
```

The same rule was a useful workaround for another bug, this time PG 7.0 bug 23 [17].
To observe the failure the bug report details the following setup:

```sql
CREATE TABLE L (PID INT NOT NULL, SEARCH BOOL, SERVICE BOOL);
INSERT INTO L VALUES (1,'T','F');
INSERT INTO L VALUES (1,'T','F');
INSERT INTO L VALUES (1,'T','F');
INSERT INTO L VALUES (1,'T','F');
INSERT INTO L VALUES (1,'T','F');
INSERT INTO L VALUES (1,'F','F');
INSERT INTO L VALUES (1,'F','F');
INSERT INTO L VALUES (2,'F','F');
INSERT INTO L VALUES (3,'F','F');
INSERT INTO L VALUES (3,'T','F');
```

The following **VIEW**s are then defined (notice the **GROUP BY** clause):

```sql
CREATE VIEW CURRENT AS SELECT PID, COUNT(PID), SEARCH, SERVICE
FROM L GROUP BY PID, SEARCH, SERVICE;
CREATE VIEW CURRENT2 AS SELECT PID, COUNT(PID), SEARCH, SERVICE
FROM L GROUP BY PID, SEARCH, SERVICE;
```

Incorrect results are obtained by issuing the following **SELECT** statement (this is due to the **GROUP BY** clause used in the **VIEW**s and the **COUNT** used on a column from a **VIEW**):

```sql
SELECT CURRENT.PID, CURRENT.COUNT AS SEARCHTRUE, CURRENT2.COUNT AS SEARCHFALSE
FROM CURRENT, CURRENT2
WHERE CURRENT.PID = CURRENT2.PID AND CURRENT.SEARCH = 'T'
  AND CURRENT2.SEARCH = 'F' AND CURRENT2.SERVICE = 'F';
```

The incorrect results returned are:

<table>
<thead>
<tr>
<th>pid</th>
<th>searchtrue</th>
<th>searchfalse</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>10</td>
<td>10</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>

\(^6\) The syntax used is specific to Firebird.
By using **TEMPORARY TABLE**s instead of **VIEW**s the correct result is obtained:

```sql
SELECT PID, COUNT(PID), SEARCH, SERVICE INTO TEMP CURRENT
FROM L GROUP BY PID, SEARCH, SERVICE;
SELECT PID, COUNT(PID), SEARCH, SERVICE INTO TEMP CURRENT2
FROM L GROUP BY PID, SEARCH, SERVICE;
SELECT CURRENT.PID, CURRENT.COUNT AS SEARCHTRUE, CURRENT2.COUNT AS SEARCHFALSE
FROM CURRENT, CURRENT2
WHERE CURRENT.PID = CURRENT2.PID AND CURRENT.SEARCH = 'T'
  AND CURRENT2.SEARCH = 'F' AND CURRENT.SERVICE = 'F' AND CURRENT2.SERVICE = 'F';
```

<table>
<thead>
<tr>
<th>pid</th>
<th>searchtrue</th>
<th>searchfalse</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>5</td>
<td>2</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>

We used **TEMPORARY TABLE**s in PG 7.0, and not **STORED PROCEDURE**s, since PG 7.0 does not support functions (procedures) that return multiple rows. Details of the other generic rephrasing rules and how they can be used as workarounds for other reported bugs are given in [15]. We looked at how many of the generic rules can be applied to the bugs reported for the open-source servers in our bugs study. The results are shown in Table 1. The leftmost three columns of the table show the results for the non-self-evident failures caused by read (i.e. SELECT) statements. A number of these are also classified as a “user error”, i.e. the user issues an incorrect statement, which the server incorrectly executes without raising an exception. For example, IB 6.0 incorrectly executes a statement such as SELECT X FROM A, B even though the column X is defined in both tables A and B, which can lead to ambiguous results; PG 7.0 and PG 7.2, correctly, raise an exception. If we exclude the “user error” bugs, then in all the server pairs the generic rules can be used as workarounds for at least 80% of the non-self-evident failures caused by read statements.
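The logical equivalence that Rule 8 relies on can also be checked mechanically on a correct server. A small Python/sqlite3 sanity check of the first bug report's schema (sqlite stands in for the O-servers here, and a temporary table takes the place of the stored procedure, which sqlite lacks):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE CUSTOMERS (ID INT, NAME VARCHAR(10));
CREATE TABLE INVOICES  (ID INT, CUST_ID INT, CODE VARCHAR(10), QUANTITY INT);
INSERT INTO CUSTOMERS VALUES (1, 'ME');
INSERT INTO INVOICES VALUES (1, 1, 'INV.1', 5), (2, 1, 'INV.2', 10),
                            (3, 1, 'INV.3', 15), (4, 1, 'INV.4', 20);
CREATE VIEW V_CUSTOMERS AS SELECT DISTINCT ID, NAME FROM CUSTOMERS;
CREATE TEMP TABLE T_CUSTOMERS AS SELECT DISTINCT ID, NAME FROM CUSTOMERS;
""")

# The same join, once through the VIEW and once through its Rule 8 rephrasing.
query = """SELECT SUM(INV.QUANTITY) FROM INVOICES INV
           INNER JOIN {src} CUST ON INV.CUST_ID = CUST.ID"""
via_view = cur.execute(query.format(src="V_CUSTOMERS")).fetchone()[0]
via_temp = cur.execute(query.format(src="T_CUSTOMERS")).fetchone()[0]
print(via_view, via_temp)  # → 50 50: a correct server returns the same result for both
```

On a faulty server the two forms would disagree, which is exactly the extra “opinion” the adjudicator uses to break a tie.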
The rightmost 4 columns of the table are for the bugs that cause state-changing failures, which have been further subdivided into bugs in DDL and write statements. We can see that generic rules can be used as workarounds for at least 60% of failures caused by the state-changing statements.

### 3.2 Specific Rules

The generic rephrasing rules that we have defined do not provide workarounds for all the failures caused by the bugs collected in our study. For these failures, specific workaround rules need to be defined. For example, recursive BEFORE UPDATE TRIGGERs can return error messages in FB 1.0/IB 6.0, which means the table for which the trigger is defined becomes unusable (FB 1.0 bug 625899 [16]). A generic rule could not be defined for this bug. A specific workaround (and a generic recovery procedure) upon encountering this error message would be to:

- disable the trigger in FB 1.0 / IB 6.0;
- read the log of the other server to check the sequence of write statements that have been issued as a result of the trigger;
- send this sequence of statements explicitly to the FB 1.0 / IB 6.0 server.

The workaround above would work in a diverse-server configuration if the other server[s] work correctly (the other servers in our study do not contain this bug); without design diversity such a fault clearly cannot be dealt with in this way. We have found that a large number of bugs, if server diversity is not employed, would require very specific rules to be defined to work around the failures that they cause. In many cases these rules require substantial new implementation in the form of “wrapping” of the results returned to the client (or, for write statements, before they are stored in the database), or re-implementing parts of the functionality of the database that are found to be faulty and for which no workaround exists in SQL.
Although possible, such an approach is clearly limited, because the newly developed code can itself be faulty, which may diminish the reliability gains that can be obtained from its use. This reiterates that design diversity is desirable.

### 4. Performance Implications of Rephrasing

To measure the performance implications of rephrasing, we conducted a number of experiments based on the TPC-C benchmark [5]. The factors which degrade performance when rephrasing is employed are:

1. Delays enforced by the middleware for comparison of results.
2. Delays from using the following mechanisms within transactions:
   - transaction SAVEPOINTs;
   - transaction ROLLBACKs;
   - execution of SELECT statements after write statements (INSERT, UPDATE, DELETE);
   - rephrasing.

The additional delay introduced by the use of rephrasing is delay 2, and we have performed an experimental study to estimate it. Delay 1 would exist also in a diverse setup with or without rephrasing. Studies that report measures of other delays which are not specific to rephrasing (such as enforcing 1-copy serialisability) can be found in [18], [6]. There are other factors that can influence the degradation of performance which we have not measured in our experimental setup (e.g. rephrasing delays when more than one rephrasing rule is used). The experiments that we have conducted aim to provide an initial estimate of the delays due to rephrasing. A more thorough performance evaluation should also take into account concurrent execution of transactions. As was also noted by one of the anonymous reviewers, for some concurrency control mechanisms the increase in transaction execution times due to rephrasing may also increase the probability of conflicts due to concurrency, which may further degrade performance. The experimental setup consisted of three computers. All three ran Microsoft's Windows 2000 operating system and had 384 MB RAM and Intel Pentium 4 1.5 GHz processors.
One machine hosted the client implementation of the TPC-C benchmark. The other two machines hosted the servers (PostgreSQL 8.0 and Firebird 1.5). We used later releases of the servers than the ones used in our bugs study, since those earlier releases do not support SAVEPOINTs and ROLLBACKs within transactions. We have not used any commercial servers in our experiments, since their license agreements are very restrictive with regard to publishing performance data. We ran experiments on both diverse and non-diverse setups. In the diverse experiments we always wait for the slowest server response before we can start the next transaction. Therefore the diverse setups here are always slower (other configurations are possible and we have discussed some of these in [9]). Figure 2 illustrates the sequence of executions within a transaction for the different non-diverse setups. The grey boxes represent the fault tolerance mechanism used, whereas the dotted lines represent the added delay from the use of the respective mechanism. Setup a) is the baseline, against which we measure the added delays. Setups b), c) and d) measure the delays of using the fault tolerance mechanisms when no failures are observed (i.e. the cost of being cautious). Setups e) and f) measure the cost of re-execution of a statement. These experiments measure delays for a number of situations:

- re-execution of an unchanged statement as a possible protection against transient failures (caused by the so-called “Heisenbugs”);
- re-execution of a logically equivalent rephrased statement in case the first one has failed self-evidently (i.e. a crash or other exceptional failure);
- re-execution of a logically equivalent rephrased statement to get additional results for comparison on the middleware, to increase the likelihood of failure detection for non-self-evident failures.

In our experiments we did not use rephrased statements. Instead, the same statement was executed twice.
This is a simplification due to the absence of a proper implementation of rephrasing. In the absence of any other data, we wanted to get an initial estimate of the delays that the various fault tolerance mechanisms will produce with the database servers. The diverse setups have a similar structure. The only difference is that in the diverse setups we only use one SAVEPOINT (at the beginning of the transaction), rather than one before each write statement, and therefore we may also have only one ROLLBACK (at the end of the transaction). For setups e) and f), this means that we first execute every statement once, then we ROLLBACK to the beginning and execute all the statements again. So the difference between the diverse and non-diverse setups is a different level of granularity of using SAVEPOINTs/ROLLBACKs.

\(^9\) b) detection of erroneous writes; c) SAVEPOINTs are used before write statements for finer-grained recovery; d) both: SAVEPOINTs are used and the modified rows are read back (a combination of b and c).

\(^{10}\) e) optimistic (on writes) rephrasing: each statement is executed twice; to ensure that the state of the database remains unchanged during the second execution of a write statement we use SAVEPOINTs and ROLLBACKs; f) pessimistic rephrasing: same as e), but the written rows are also read back to protect against erroneous writes.

Fig. 2. A transaction execution sequence in the experimental setups. The shaded boxes represent the fault tolerance mechanism used and the dotted lines represent the additional delays from their use.

The second executions of the statements are proxies for rephrased versions of the statements. The full results of these experiments are given in Table 2. The first column explains the setup under which the experiment was run. The following 4 columns spell out which fault tolerance mechanisms were used (if a cell is blank then the respective mechanism was not used).
The following 3 columns show the average execution time of a transaction, and the last 3 columns show the added delay (in percentages) relative to the baseline of each setup. The first six rows contain the results for each of the setups explained earlier (and illustrated in Figure 2). The last two rows are structurally the same as setups (e) and (f) respectively; however, in these experiments we have tried to simulate the effect of a simple learning rule: if after 1000 executions a statement has been found to be correct then we stop rephrasing (in our simulation this means we stop executing the statement twice, for both setups, and additionally stop executing the SELECT statements that read the modifications of the write statements, for setup (h)).

The delays seem to be proportionally higher in PostgreSQL than in Firebird. This is because the execution time of COMMITs is smaller in Firebird for the experiments with a larger number of SELECT statements. The number of write statements to be COMMITted always remains the same in all experiments (even in the ones with 2 executions of statements, since the first execution of a write statement is always ROLLBACKed). Comparing setups a) and e) we can see that even though in setup e) every statement is executed twice, the average execution times of the transactions are not simply twice the execution times of the transactions in setup a). This is explained by the fact that the number of transactions remains the same (i.e. we still have the same number of COMMITs) and also the data may already be stored in RAM, which reduces the execution time of the second statement. The same holds when comparing the results of setups b) and f).

Table 2. Performance effects of the various fault-tolerance schemes. Each experiment is run with a load of 10,000 transactions.

<table>
<thead>
<tr>
<th rowspan="2">Setup description (with reference to Figure 2)</th>
<th rowspan="2">SAVEPOINTs</th>
<th rowspan="2">ROLLBACKs</th>
<th rowspan="2">2 executions of each statement</th>
<th rowspan="2">SELECT after WRITE statements</th>
<th colspan="3">Average transaction execution time (ms)</th>
<th colspan="3">Delay relative to the baseline (%)</th>
</tr>
<tr>
<th>PG 8.0</th>
<th>FB 1.5</th>
<th>Diverse PG 8.0 &amp; FB 1.5</th>
<th>PG 8.0</th>
<th>FB 1.5</th>
<th>Diverse PG 8.0 &amp; FB 1.5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Baseline (a)</td>
<td></td><td></td><td></td><td></td>
<td>228</td><td>306</td><td>343</td>
<td></td><td></td><td></td>
</tr>
<tr>
<td>Detection of erroneous writes (b)</td>
<td></td><td></td><td></td><td>√</td>
<td>292</td><td>356</td><td>434</td>
<td>28.3</td><td>16.3</td><td>26.5</td>
</tr>
<tr>
<td>Finer granularity of recovery (c)</td>
<td>√</td><td></td><td></td><td></td>
<td>240</td><td>308</td><td>350</td>
<td>5.3</td><td>0.4</td><td>1.8</td>
</tr>
<tr>
<td>Combination of b and c (d)</td>
<td>√</td><td></td><td></td><td>√</td>
<td>305</td><td>364</td><td>433</td>
<td>33.9</td><td>18.6</td><td>26.0</td>
</tr>
<tr>
<td>Optimistic (on writes) rephrasing (e)</td>
<td>√</td><td>√</td><td>√</td><td></td>
<td>353</td><td>450</td><td>489</td>
<td>54.9</td><td>46.9</td><td>42.3</td>
</tr>
<tr>
<td>Pessimistic rephrasing (f)</td>
<td>√</td><td>√</td><td>√</td><td>√</td>
<td>496</td><td>601</td><td>699</td>
<td>118</td><td>96.2</td><td>105.5</td>
</tr>
<tr>
<td>Learning optimization of e (g)</td>
<td>√</td><td>√</td><td>√</td><td></td>
<td>256</td><td>325</td><td>402</td>
<td>12.6</td><td>6.2</td><td>17.3</td>
</tr>
<tr>
<td>Learning optimization of f (h)</td>
<td>√</td><td>√</td><td>√</td><td>√</td>
<td>278</td><td>341</td><td>524</td>
<td>22.5</td><td>11.4</td><td>52.6</td>
</tr>
</tbody>
</table>

Since the numbers in Table 2 represent point estimates (i.e.
they are single runs of an experiment per setup), we have repeated the experiments for setups a) and f) to measure the non-deterministic variation that may exist between the different runs. We observed a very small difference (less than 1% for five out of six of the experiments and less than 3% for all). Hence we can trust, with a higher degree of confidence, that the observations documented in Table 2 closely represent the ‘true’ differences between the different setups.

### 5. Discussion

We presented in section 2 the architecture we propose for a fault-tolerant server employing rephrasing. The middleware used would make use of a rephrasing algorithm. Any fault-tolerant solution which makes use of server diversity would need to have ‘connectors’ developed as part of the middleware to translate a client-sent statement into the dialect of the respective server, because each server ‘speaks’ its own dialect of SQL. The rephrasing algorithms can also be part of these connectors. A related point is that database servers offer features that are extensions to the SQL standard, and these features may differ between the servers. Therefore, for applications which require a richer set of functionality, data diversity alone would be attractive, as it would allow applications to use the full set of features. A complex statement, which can be directly executed on some servers but not others, may need to be rephrased as a logically equivalent sequence of simpler statements for the latter. For example, the TRUNCATE command is a PostgreSQL-specific feature (and is buggy in version 7.0; see bug 20 [17] for details). In its stead the DELETE command can be used to work around the problem. The DELETE command is also implemented in Firebird and all the other SQL-compliant servers. Since most of these rules are transformations of the SQL grammar, they are amenable to formal analysis.
Thus, despite the additional implementation, high reliability can be achieved with a combination of formal analysis and testing of the new code. The results presented in section 3 demonstrate that a small number of rephrasing rules can help with server diagnosis and state recovery. We observed that the limited set of generic rephrasing rules we have defined (14 in total) can be used as workarounds for at least 80% of the non-self-evident failures caused by read statements, and at least 60% of failures caused by write or DDL statements, in any of the open-source 2-diverse setups in our study. We have also observed that using data diversity without design diversity would require a large number of specific rephrasing rules to work around certain failures. Implementing such rules might require a substantial amount of new implementation, which itself may be faulty, thus reducing the possible reliability gains that can be obtained from their use. Rephrasing has been proposed as a possibility to detect failures that would otherwise be undetectable in some replication settings. The possible benefits of this approach are its relatively low cost in comparison with design diversity, and the fact that it can be used with or without design diversity, allowing for various cost-dependability trade-offs. Possible setups include:

- In non-diverse redundant replication settings, if high dependability assurances are required, the only option available would be to rephrase all the statements sent to the server. This can lead to high performance penalties. To reduce the performance penalty some form of learning strategy can be applied, e.g. keep track of all the statements that have been rephrased.
If the rephrased statement keeps giving the same results as the original statement, then confidence is gained that the original statement is giving the correct result and the statement does not have to be rephrased in future occurrences (as we did in setups g) and h) of the TPC-C experiments). The other dimension is to stop sending the client version of a statement to a server if it always gives an incorrect result. In this case the middleware can flag each occurrence of this statement and use the rephrased version of it without sending the original statement to the server [2]. This reduces the time taken to respond to the client.

- In a diverse server configuration a less rephrasing-intensive approach may be used where only the read statements (i.e. SELECTs) that return different results are rephrased (assuming that at least two servers are running in parallel so that a mismatch is detected). The rephrasing is also done for all the write statements (to ensure that the state of the database is not corrupted). Since a smaller set of statements needs to be rephrased, performance is enhanced. The non-self-evident identical failures, however (we observed 4 of these in the study with known bugs of SQL servers [1]), will not be detected. To further enhance performance the same learning strategies can be used as in the previous setup.

6. Conclusions

We have reported previously [1] on the dependability gains that can potentially be achieved from deploying a fault-tolerant SQL server which makes use of diverse off-the-shelf SQL servers. From studying bugs reported for four off-the-shelf servers we found that failure detection rates in 1-out-of-2 configurations were at least 94%, and this increased to 100% in configurations which employed more than two servers. However, fault tolerance is more than just failure detection.
In this paper we reported on the mechanism of data diversity and its application with SQL servers in aiding failure diagnosis and state recovery. We have defined 14 generic 'workaround rules' to be implemented in a 'rephrasing' algorithm which, when applied to a certain SQL statement, will generate logically equivalent statements. We have also argued that since these rules are transformations of the SQL language syntax, they are amenable to formal analysis, and dependability gains from employing rephrasing are achievable despite the development of bespoke new code. We also outlined a possible architecture of a fault-tolerant server employing diverse SQL servers and detailed how the middleware used in it can be extended to also handle rephrasing of SQL statements. We also presented some performance measurements from experiments we have run with an implementation of the TPC-C benchmark [5], which gave initial estimates of the likely delays due to employing rephrasing. Further work that is desirable includes:

- demonstrating the feasibility of automatic translation of SQL statements from, say, ANSI/ISO SQL syntax to the SQL dialect implemented by the deployed SQL servers. We have completed some preliminary work on implementing translators between MSSQL and Oracle dialects for SELECTs, and between Oracle and PostgreSQL dialects for SELECT, INSERT and DELETE statements;
- developing the necessary components so that users can try out diversity in their own installations, since the main obstacle now is the lack of popular off-the-shelf "middleware" packages for data replication with diverse SQL servers. This would also include implementing a mechanism for maintaining (adding/removing) rephrasing rules as add-on components in the middleware.

### Acknowledgment

This work has been supported in part by the Interdisciplinary Research Collaboration in Dependability (DIRC) project funded by the U.K. Engineering and Physical Sciences Research Council (EPSRC).
The authors would like to thank the anonymous reviewers for their thoughtful comments and useful suggestions.

### Bibliography
# Fast Dynamic Arrays

Philip Bille¹, Anders Roy Christiansen², Mikko Berggren Ettienne³, and Inge Li Gørtz⁴

¹ The Technical University of Denmark, Lyngby, Denmark (phbi@dtu.dk)
² The Technical University of Denmark, Lyngby, Denmark (aroy@dtu.dk)
³ The Technical University of Denmark, Lyngby, Denmark (miet@dtu.dk)
⁴ The Technical University of Denmark, Lyngby, Denmark (inge@dtu.dk)

**Abstract**

We present a highly optimized implementation of tiered vectors, a data structure for maintaining a sequence of $n$ elements supporting access in time $O(1)$ and insertion and deletion in time $O(n^\epsilon)$ for $\epsilon > 0$ while using $o(n)$ extra space. We consider several different implementation optimizations in C++ and compare their performance to that of vector and set from the standard library on sequences with up to $10^8$ elements. Our fastest implementation uses much less space than set while providing speedups of $40 \times$ for access operations compared to set and speedups of $10,000 \times$ compared to vector for insertion and deletion operations while being competitive with both data structures for all other operations.

**1998 ACM Subject Classification** F.2.2 Nonnumerical Algorithms and Problems, E.1 Data Structures

**Keywords and phrases** Dynamic Arrays, Tiered Vectors

**Digital Object Identifier** 10.4230/LIPIcs.ESA.2017.16

## 1 Introduction

We present a highly optimized implementation of a data structure solving the dynamic array problem, that is, maintain a sequence of elements subject to the following operations:

- `access(i)`: return the $i^{th}$ element in the sequence.
- `access(i, m)`: return the $i^{th}$ through $(i + m - 1)^{th}$ elements in the sequence.
- `insert(i, x)`: insert element $x$ immediately after the $i^{th}$ element.
- `delete(i)`: remove the $i^{th}$ element from the sequence.
- `update(i, x)`: exchange the $i^{th}$ element with $x$.
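For reference, this interface is met trivially by a plain array. The sketch below (our own illustration, not the paper's data structure) wraps `std::vector`: `access` and `update` take $O(1)$ time, but `insert` and `delete` take $O(n)$ time, which is exactly the trade-off tiered vectors improve.

```cpp
#include <vector>
#include <cstddef>

// Naive dynamic array baseline: O(1) access/update, O(n) insert/delete.
// "erase" stands in for the delete operation (delete is a C++ keyword),
// and insert(i, x) here inserts at position i for simplicity.
template <class T>
struct NaiveDynArray {
    std::vector<T> a;

    T access(std::size_t i) const { return a[i]; }
    std::vector<T> access(std::size_t i, std::size_t m) const {   // A[i .. i+m-1]
        return std::vector<T>(a.begin() + i, a.begin() + i + m);
    }
    void insert(std::size_t i, const T& x) { a.insert(a.begin() + i, x); } // shifts O(n) elements
    void erase(std::size_t i) { a.erase(a.begin() + i); }                  // shifts O(n) elements
    void update(std::size_t i, const T& x) { a[i] = x; }
};
```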
This is a fundamental and well studied data structure problem [2, 4, 7, 8, 3, 1, 5, 6] solved by textbook data structures like arrays and binary trees. Many dynamic trees provide all the operations in $O(\log n)$ time, including 2-3-4 trees, AVL trees, splay trees, etc., while Dietz [2] gives a data structure that matches the lower bound of $\Omega(\log n / \log \log n)$ shown by Fredman and Saks [4]. In this paper, however, we focus on the problem where `access` must run in $O(1)$ time. Goodrich and Kloss present what they call tiered vectors [5], with a time complexity of $O(1)$ for `access` and `update` and $O(n^{1/l})$ for `insert` and `delete` for any constant integer $l \geq 2$, similar to the ideas presented by Frederickson in [3]. The data structure only uses $o(n)$ extra space beyond that required to store the actual elements. At the core, the data structure is a tree with out-degree $n^{1/l}$ and constant height $l - 1$.

© Philip Bille, Anders Roy Christiansen, Mikko Berggren Ettienne, and Inge Li Gørtz; licensed under Creative Commons License CC-BY. Editors: Kirk Pruhs and Christian Sohler; Article No. 16; pp. 16:1–16:13. Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Dagstuhl Publishing, Germany.

Goodrich and Kloss compare the performance of an implementation with $l = 2$ to that of `vector` from the standard library of Java and show that the structure is competitive for access operations while being significantly faster for insertions and deletions. Tiered vectors provide a performance trade-off between standard arrays and balanced binary trees for the dynamic array problem.

**Our Contribution.** In this paper, we present what we believe is the first implementation of tiered vectors that supports more than 2 tiers.
Our C++ implementation supports `access` and `update` in times that are competitive with the `vector` class from C++'s standard library, while `insert` and `delete` run more than $10,000 \times$ faster. It performs `access` and `update` more than $40 \times$ faster than the `set` class from the standard library, while `insert` and `delete` are only a few percent slower. Furthermore, `set` uses more than $10 \times$ more space than our implementation. All of this when working on large sequences of $10^8$ 32-bit integers. To obtain these results, we significantly decrease the number of memory probes of the original tiered vector. Our best variant requires only half as many memory probes as the original tiered vector for `access` and `update` operations, which is critical for the practical performance. Our implementation is cache efficient, which makes all operations run fast in practice even on tiered vectors with several tiers. We experimentally compare the different variants of tiered vectors. Besides the comparison to the two commonly used C++ data structures, `vector` and `set`, we compare the different variants of tiered vectors to find the best one. We show that the number of tiers has a significant impact on the performance, which underlines the importance of tiered vectors supporting more than 2 tiers. Our implementations are parameterized and thus support any number of tiers $\geq 2$. They use a number of tricks like *template recursion* to keep the code rather simple while enabling the compiler to generate highly optimized code.

## 2 Preliminaries

The first and $i^{th}$ element of a sequence $A$ are denoted $A[0]$ and $A[i - 1]$ respectively, and the $i^{th}$ through $j^{th}$ elements are denoted $A[i - 1, j - 1]$. Let $A_1 \cdot A_2$ denote the concatenation of the sequences $A_1$ and $A_2$.
$|A|$ denotes the number of elements in the sequence $A$. A circular shift of a sequence $A$ by $x$ is the sequence $A[|A| - x, |A| - 1] \cdot A[0, |A| - x - 1]$. Define the remainder of division of $a$ by $b$ as $a \mod b = a - qb$ where $q$ is the largest integer such that $q \cdot b \leq a$. Define $A[i, j] \mod w$ to be the elements $A[i \mod w], A[(i + 1) \mod w], \ldots, A[j \mod w]$, i.e. $A[4, 7] \mod 5 = A[4], A[0], A[1], A[2]$. Let $\lfloor x \rfloor$ denote the largest integer smaller than or equal to $x$.

## 3 Tiered Vectors

In this section we describe how the tiered vector data structure from [5] works.

**Data Structure.** An $l$-tiered vector can be seen as a tree $T$ with root $r$, fixed height $l - 1$ and out-degree $w$, for any $l \geq 2$. A node $v \in T$ represents a sequence of elements $A(v)$ where $A(r)$ is the sequence represented by the tiered vector. The capacity $\text{cap}(v)$ of a node $v$ is $w^{\text{height}(v) + 1}$. For a node $v$ with children $c_1, c_2, \ldots, c_w$, $A(v)$ is a circular shift of the concatenation of the elements represented by its children, $A(c_1) \cdot A(c_2) \cdot \ldots \cdot A(c_w)$. The circular shift is determined by an integer $\text{off}(v) \in [\text{cap}(v)]$ that is explicitly stored for all nodes. Thus the sequence of elements $A(v)$ of an internal node $v$ can be reconstructed by recursively reconstructing the sequence for each of its children, concatenating these and then circularly shifting the sequence by $\text{off}(v)$. See Figure 1 for an illustration. A leaf $v$ of $T$ explicitly stores the sequence $A(v)$ in a circular array $\text{elems}(v)$ of size $w$, whereas internal nodes only store their offsets. Call a node $v$ full if $|A(v)| = \text{cap}(v)$ and empty if $|A(v)| = 0$.
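The role of $\text{off}(v)$ can be illustrated on a single leaf: a circular array of width $w$ where logical index $i$ maps to physical slot $(i + \text{off}) \bmod w$, so prepending an element costs one offset decrement and one write instead of moving every element. A minimal sketch, under our own naming (`CircularLeaf`, `push_front` are not from the paper):

```cpp
#include <cstddef>

// Minimal sketch of a single tiered-vector leaf: a circular buffer of
// fixed width W whose logical order is determined by an offset.
template <class T, std::size_t W>
struct CircularLeaf {
    T elems[W] = {};
    std::size_t off = 0;     // circular shift of the stored sequence
    std::size_t size = 0;

    T& at(std::size_t i) { return elems[(i + off) % W]; }   // A(v)[i]

    // Prepend: decrement the offset (mod W) and write the new first element;
    // no existing element moves.
    void push_front(const T& x) {
        off = (off + W - 1) % W;
        elems[off] = x;
        ++size;
    }
};
```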
In order to support fast access, for all nodes $v$ the elements of $A(v)$ are located in consecutive children of $v$ that are all full, except the children containing the first and last element of $A(v)$, which may be only partly full.

**Access & Update.** To access an element $A(r)[i]$ at a given index $i$, one traverses a path from the root down to a leaf in the tree. In each node the offset of the node is added to the index to compensate for the cyclic shift, and the traversal continues in the child corresponding to the newly calculated index. Finally the desired element is returned from the element array of that leaf. Let $\text{access}(v, i)$ return the element $A(v)[i]$; it can recursively be computed as:

**v is internal:** Compute $i' = (i + \text{off}(v)) \mod \text{cap}(v)$, let $v'$ be the $\lfloor i'/\text{cap}(v') \rfloor^{th}$ child of $v$ and return the element $\text{access}(v', i' \mod \text{cap}(v'))$.

**v is leaf:** Compute $i' = (i + \text{off}(v)) \mod w$ and return the element $\text{elems}(v)[i']$.

The time complexity is $\Theta(l)$ as we visit all nodes on a root-to-leaf path in $T$. To navigate this path we must follow $l - 1$ child pointers, look up $l$ offsets, and access the element itself; this requires $l - 1 + l + 1 = 2l$ memory probes. The update operation is entirely similar to access, except the element found is not returned but substituted with the new element. The running time is therefore $\Theta(l)$ as well. For further use, let $\text{update}(v, i, e)$ be the operation that sets $A(v)[i] = e$ and returns the element that was substituted.

**Range Access.** Accessing a range of elements can obviously be done by using the $\text{access}$-operation multiple times, but this results in redundant traversals of the tree, since consecutive elements of a leaf often (but not always, due to circular shifts) correspond to consecutive elements of $A(r)$. Let $\text{access}(v, i, m)$ report the elements $A(v)[i \ldots i + m - 1]$ in order.
The operation can recursively be defined as:

**v is internal:** Let $i_l = (i + \text{off}(v)) \mod \text{cap}(v)$ and $i_r = (i_l + m - 1) \mod \text{cap}(v)$. The children of $v$ that contain the elements to be reported are in the range $[\lfloor i_l/\text{cap}(c) \rfloor, \lfloor i_r/\text{cap}(c) \rfloor] \mod w$, where $\text{cap}(c)$ denotes the capacity of a child of $v$. Recurse into each of these children, reporting the relevant subrange of the first and the last child and the full contents of the children in between.

**v is leaf:** Report the elements $\text{elems}(v)[i, i + m - 1] \mod w$.

The running time of this strategy is $O(lm)$, but it saves a constant factor over the naive solution.

**Insert & Delete.** Inserting an element at the end (or beginning) of the array can simply be achieved using the update-operation. Thus the interesting part is fast insertion at an arbitrary position; this is where we utilize the offsets. Consider a node $v$: the key challenge is to shift a big chunk of elements $A(v)[i, i + m - 1]$ one index right (or left) to $A(v)[i + 1, i + m]$ to make room for a new element (without actually moving each element in the range). Look at the range of children $c_l, c_{l+1}, \ldots, c_r$ that covers the range of elements $A(v)[i, i + m - 1]$ to be shifted. All elements in $c_{l+1}, \ldots, c_{r-1}$ must be shifted. These children are guaranteed to be full, so make a circular shift by decrementing each of their offsets by one. Afterwards take the element $A(c_{i-1})[0]$ and move it to $A(c_i)[0]$ using the update operation, for $l < i \leq r$. In $c_l$ and $c_r$ only a subrange of the elements might need shifting, which we do recursively.
In the base case of this recursion, namely when $v$ is a leaf, shift the elements by actually moving them one-by-one in $\text{elems}(v)$. Formally we define the $\text{shift}(v, e, i, m)$ operation, which (logically) shifts all elements $A(v)[i, i + m - 1]$ one place right to $A(v)[i + 1, i + m]$, sets $A(v)[i] = e$ and returns the value that was previously at position $A(v)[i + m]$, as:

**v is internal:** Let $i_l = (i + \text{off}(v)) \mod \text{cap}(v)$ and $i_r = (i_l + m) \mod \text{cap}(v)$. The children of $v$ that must be updated are $c_l, \ldots, c_r$, with $l = \lfloor i_l/\text{cap}(c) \rfloor$ and $r = \lfloor i_r/\text{cap}(c) \rfloor$ (taken modulo $w$). First let $e_l = \text{shift}(c_l, e, i_l \mod \text{cap}(c_l), \min(m, \text{cap}(c_l) - (i_l \mod \text{cap}(c_l))))$. Then, for each full child $c_i$, $i = l + 1, \ldots, r - 1$, let $e_i = \text{update}(c_i, \text{size}(c_i) - 1, e_{i-1})$ and set $\text{off}(c_i) = (\text{off}(c_i) - 1) \mod \text{cap}(c_i)$. Finally call $\text{shift}(c_r, e_{r-1}, 0, i_r \mod \text{cap}(c_r))$ and return its result.

**v is leaf:** Let $e_o = \text{elems}(v)[(i + m) \mod w]$. Move the elements $\text{elems}(v)[i, i + m - 1] \mod w$ to $\text{elems}(v)[i + 1, i + m] \mod w$, and set $\text{elems}(v)[i] = e$. Return $e_o$.

An insertion $\text{insert}(i, e)$ can then be performed as $\text{shift}(\text{root}, e, i, \text{size}(\text{root}) - i - 1)$. The running time of an insertion is given by $T(l) = 2T(l - 1) + O(w \cdot l)$, which gives $T(l) = O(2^l w)$. A deletion of an element can basically be done as an inverted insertion, thus deletion can be implemented using the shift-operation from before. A $\text{delete}(i)$ can be performed as $\text{shift}(r, \bot, 0, i)$, where $\bot$ is an arbitrary element, followed by an update of the root's offset to $(\text{off}(r) + 1) \mod \text{cap}(r)$.

**Space.** There are at most $O(w^{l-1})$ nodes in the tree and each takes up constant space, thus the total space of the tree is $O(w^{l-1})$.
All leaves are either empty or full, except the two leaves storing the first and last element of the sequence, which might contain fewer than $w$ elements. Because the arrays of empty leaves are not allocated, the space overhead of the arrays is $O(w)$. Thus, beyond the space required to store the $n$ elements themselves, tiered vectors have a space overhead of $O(w^{l-1})$. To obtain the desired bounds, $w$ is maintained such that $w = \Theta(n^\epsilon)$ where $\epsilon = 1/l$ and $n$ is the number of elements in the tiered vector. This can be achieved by using global rebuilding to gradually increase/decrease the value of $w$ when elements are inserted/deleted without asymptotically changing the running times. We will not provide the details here. We sum up the original tiered vector data structure in the following theorem:

**Theorem 1** ([5]). The original $l$-tiered vector solves the dynamic array problem for $l \geq 2$ using $\Theta(n^{1-1/l})$ extra space while supporting access and update in $\Theta(l)$ time and $2l$ memory probes. The operations insert and delete take $O(2^l n^{1/l})$ time.

## 4 Improved Tiered Vectors

In this paper, we consider different new variants of the tiered vector. This section considers the theoretical properties of these approaches. In particular we are interested in the number of memory accesses that are required for the different memory layouts, since this turns out to have an effect on the experimental running time. In Section 5.1 we analyze the actual impact in practice through experiments.

### 4.1 Implicit Tiered Vectors

As the degree of all nodes is always fixed to the same value $w$ (it may be changed for all nodes when the tree is rebuilt due to a full root), it is possible to lay out the offsets and elements such that no pointers are necessary to navigate the tree. Simply number all nodes from left-to-right, level-by-level, starting in the root with number 0.
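With this left-to-right, level-by-level numbering, navigating the tree needs no pointers: for out-degree $w$ the index arithmetic is the same as in a $w$-ary heap. A minimal sketch with our own helper names:

```cpp
#include <cstddef>

// Implicit w-ary tree navigation: nodes are numbered level-by-level,
// left-to-right, with the root as node 0, so the children of node v
// are nodes v*w + 1 ... v*w + w and no child pointers are stored.
std::size_t child_index(std::size_t node, std::size_t c, std::size_t w) {
    return node * w + c + 1;        // c-th child (0-indexed) of node
}
std::size_t parent_index(std::size_t node, std::size_t w) {
    return (node - 1) / w;          // inverse of child_index
}
```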
Using this numbering scheme, we can store all offsets of the nodes in a single array, and similarly all the elements of the leaves in another array. To access an element, we only have to look up the offset for each node on the root-to-leaf path, which requires $l - 1$ memory probes plus the final element lookup, i.e. in total $l$, which is half as many as the original tiered vector. The downside of this representation is that it must allocate the two arrays in their entirety from the beginning (or when rebuilding). This results in a $\Theta(n)$ space overhead, which is worse than the $\Theta(n^{1-\epsilon})$ space overhead of the original tiered vector.

**Theorem 2.** The implicit $l$-tiered vector solves the dynamic array problem for $l \geq 2$ using $O(n)$ extra space while supporting access and update in $O(l)$ time requiring $l$ memory probes. The operations insert and delete take $O(2^l n^{1/l})$ time.

### 4.2 Lazy Tiered Vectors

We now combine the original and the implicit representation to get both few memory probes and small space overhead. Instead of having one array storing all the elements of the leaves, we store for each leaf a pointer to a location with an array containing the leaf's elements. The array is lazily allocated in memory when elements are actually inserted into it. The total size of the offset-array and the element pointers in the leaves is $O(n^{1-\epsilon})$. At most two leaves are only partially full, therefore the total space is now again reduced to $O(n^{1-\epsilon})$. To navigate a root-to-leaf path, we now need to look at $l - 1$ offsets, follow a pointer from a leaf to its array and access the element in the array, giving a total of $l + 1$ memory accesses.

**Theorem 3.** The lazy $l$-tiered vector solves the dynamic array problem for $l \geq 2$ using $\Theta(n^{1-1/l})$ extra space while supporting access and update in $\Theta(l)$ time requiring $l + 1$ memory probes.
The operations insert and delete take $O(2^l n^{1/l})$ time.

## 5 Implementation

We have implemented a generic version of the tiered vector data structure such that the number of tiers and the size of each tier can be specified at compile time. To the best of our knowledge, all prior implementations of the tiered vector are limited to the considerably simpler 2-tier version. Most of the performance optimizations applied in the 2-tier implementation do not easily generalize. We have implemented the following variants of tiered vectors:

- **Original.** The data structure described in Theorem 1.
- **Optimized Original.** As described in Theorem 1, but with the offset of a node $v$ located in the parent of $v$, adjacent in memory to the pointer to $v$. Leaves only consist of an array of elements (since their parent stores their offset) and the root's offset is maintained separately, as there is no parent to store it in.
- **Implicit.** This is the data structure described in Theorem 2, where the tree is represented implicitly in an array storing the offsets and the elements of the leaves are located in a single array.
- **Packed Implicit.** This is the data structure described in Theorem 2 with the following optimization: the offsets stored in the offset array are packed together and stored in as little space as possible. The maximum offset of a node $v$ in the tree is $n^{(\text{height}(v)+1)\epsilon}$, and the number of bits needed to store all the offsets is therefore $\sum_{i=1}^{l} n^{1-i\epsilon} \log(n^{i\epsilon}) = \log(n) \sum_{i=1}^{l} i\epsilon\, n^{1-i\epsilon} \approx \epsilon\, n^{1-\epsilon} \log(n)$. Thus the $n^{1-\epsilon}$ offsets can be stored in approximately $\epsilon\, n^{1-\epsilon}$ words, giving a space reduction of a constant factor $\epsilon$. The smaller memory footprint could lead to better cache performance.
- **Lazy.** This is the data structure described in Theorem 3, where the tree is represented implicitly in an array storing the offsets and every leaf stores a pointer to an array storing only the elements of that leaf.
- **Packed Lazy.** This is the data structure described in Theorem 3 with the following optimization: the offset and the pointer stored in a leaf are packed together and stored at the same memory location. On most modern 64-bit systems (including the one we are testing on) a memory pointer is only allowed to address 48 bits. This means we have room to pack a 16-bit offset into the same memory location as the element pointer, which results in one fewer memory probe during an access operation.
- **Non-Templated.** All other implementations use C++ templating for recursive functions in order to let the compiler do significant code optimizations. This implementation is template free and serves as a baseline to compare the performance gains given by templating.

In Section 7 we compare the performance of these implementations.

### 5.1 C++ Templates

As for almost all other general purpose data structures in C++, we have used templates to support storing different types of data in our tiered vector. This is a well-known technique which we will not describe in detail. However, we have also used template recursion, which is basically like normal recursion except that the recursion parameter must be a compile-time constant. This allows the compiler to unfold the recursion at compile time, eliminating all (recursive) function calls by inlining code and allowing for better local code optimizations. In our case, we exploit that the height of a tiered vector is a compile-time constant and can therefore be used as the recursion parameter.
To show the rather simple code resulting from this approach (disregarding the template machinery itself), we have included a snippet of the internals of our access operation:

```cpp
template <class T, class Layer>
struct helper {
    static T& get(size_t node, size_t idx) {
        idx = (idx + get_offset(node)) % Layer::capacity;
        auto child = get_child(node, idx / Layer::child::capacity);
        return helper<T, typename Layer::child>::get(child, idx);
    }
};

template <class T, size_t W>
struct helper<T, Layer<W, LayerEnd>> {
    static T& get(size_t node, size_t idx) {
        idx = (idx + get_offset(node)) % Layer<W, LayerEnd>::capacity;
        return get_elem(node, idx);
    }
};
```

We also briefly show how to use the data structure. To specify the desired height of the tree and the width of the nodes on each tier, we also use templating:

```cpp
Tiered<int, Layer<8, Layer<16, Layer<32>>>> tiered;
```

This defines a tiered vector containing integers with three tiers. The height of the underlying tree is therefore 3, where the root has 8 children, each of which has 16 children, each of which contains 32 elements. We call this configuration 8-16-32. In this implementation of tiered vectors we have decided to let the number of children on each level be a fixed number, as described above. This imposes a maximum on the number of elements that can be inserted. However, in a production-ready implementation, it would be simple to make it growable by maintaining a single growth factor that should be multiplied on the number of children on each level. This can be combined with the templated solution since the growing only affects the number of children and not the height of the tree (by definition of tiered vectors the height is constant). This will obviously increase the running time for operations when growing/shrinking is required, but will have only minimal impact on all other operations (they will be slightly slower because computations must now take the growth factor into account).
In practice one could also, for many uses, simply pick the number of children on each level sufficiently large to ensure that the number of elements that will be inserted is less than the maximum capacity. This would result in a memory overhead when the tiered vector is almost empty, but by choosing the right variant of tiered vectors and the right parameters this overhead would in many cases be insignificant.

## 6 Comparison with C++ STL Data Structures

In the following we compare our best-performing tiered vector (see the next section) to the `vector` and the `multiset` class from the C++ standard library. The `vector` class directly supports the operations of a dynamic array. The `multiset` class is implemented as a red-black tree and is therefore interesting to compare with our data structure. Unfortunately, `multiset` does not directly support the operations of a dynamic array (in particular it has no notion of positions of elements). To simulate an access operation we instead find the successor of an element in the `multiset`. This requires a root-to-leaf traversal of the red-black tree, just as an access operation in a dynamic array implemented as a red-black tree would. Insertion is simulated as an insertion into the `multiset`, which again requires the same computations as a dynamic array implemented as a red-black tree would. Besides the random access, range access and insertion tests considered in the previous sections, we have also tested the operations data dependent access, insertion at the end, deletion, and successor. In the data dependent access tests, the next index to look up depends on the value of the prior lookup. This ensures that the processor cannot successfully pipeline consecutive lookups, but must perform them in sequence. We test insertion at the end, since this is a very common use case. Deletion is performed by deleting elements at random positions.
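The data dependent access pattern described above can be sketched as a pointer chase, where each loaded value determines the next index so that consecutive lookups cannot overlap in the pipeline. The code below is our own illustration of the pattern, not the paper's benchmark driver:

```cpp
#include <vector>
#include <cstddef>

// Data-dependent access: the next index depends on the value just read,
// forcing the CPU to serialize the lookups instead of pipelining them.
std::size_t chase(const std::vector<std::size_t>& a,
                  std::size_t start, std::size_t steps) {
    std::size_t i = start;
    for (std::size_t s = 0; s < steps; ++s)
        i = a[i] % a.size();   // the value read becomes the next index
    return i;
}
```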
The successor operation returns the successor of an element and is not actually part of the dynamic array problem, but is included since it is a commonly used operation on a set in C++. It is simply implemented as a binary search over the elements in both the vector and tiered vector tests, where the elements are now inserted in sorted order. The number of tests and operations is the same as in the other tests.

The results are summarized in Table 1, which shows that the vector performs slightly better than the tiered vector on all access and successor tests. As expected from the $\Theta(n)$ running time, it performs extremely poorly on random insertion and deletion. For insertion at the end of the sequence, vector is also slightly faster than the tiered vector. The interesting part is that even though the tiered vector requires several extra memory lookups and computations, we have managed to get the running time down to less than double that of the vector for access, even less for data-dependent access, and only a few percent slowdown for range access. As discussed earlier, this is most likely because the entire tree structure (without the elements) fits within the CPU cache, and because the required computations have been minimized.

Comparing our tiered vector to set, we would expect access operations to be faster since they run in $O(1)$ time compared to $O(\log n)$. On the other hand, we would expect insertion/deletion to be significantly slower, since it runs in $O(n^{1/l})$ time compared to $O(\log n)$ (where $l = 4$ in these tests). We see our expectations hold for the access operations, where the tiered vector is faster by more than an order of magnitude. In random insertions, however, the tiered vector is only 8% slower, even when operating on 100,000,000 elements. Both the tiered vector and set require $O(\log n)$ time for the successor operation. In our experiment the tiered vector is 3 times faster for the successor operation.
Finally, we see that the memory usage of vector and tiered vector is almost identical. This is expected, since in both cases it is primarily the elements themselves that take up space. The set uses more than 10 times as much space, so this is also a considerable drawback of the red-black tree behind that structure. To sum up, the tiered vector performs better on all tests but insertion and deletion, and even there it is highly competitive.

7 Tiered Vector Experiments

In this section we compare different variants of the tiered vector. We first consider the performance of the different representations of the data structure listed in Section 5, and how the height of the tree and the capacity of the leaves affect the running time. Afterwards we compare it to some widely used C++ standard library containers.

Environment. All experiments have been performed on an Intel Core i7-4770 CPU @ 3.40GHz with 32 GB RAM. The code has been compiled with GNU GCC version 5.4.0 with flags “-O3”. The reported times are an average over 10 test runs.

Table 1 The table summarizes the performance of the implicit tiered vector compared to the performance of set and vector from the C++ standard library. dd-access refers to data-dependent access.
<table>
<thead>
<tr>
<th>Operation</th>
<th>tiered vector</th>
<th>set</th>
<th>set / tiered</th>
<th>vector</th>
<th>vector / tiered</th>
</tr>
</thead>
<tbody>
<tr><td>access</td><td>34.07 ns</td><td>1432.05 ns</td><td>42.03</td><td>21.63 ns</td><td>0.63</td></tr>
<tr><td>dd-access</td><td>99.09 ns</td><td>1436.67 ns</td><td>14.50</td><td>79.37 ns</td><td>0.80</td></tr>
<tr><td>range access</td><td>0.24 ns</td><td>13.02 ns</td><td>53.53</td><td>0.23 ns</td><td>0.93</td></tr>
<tr><td>insert</td><td>1.79 µs</td><td>1.65 µs</td><td>0.92</td><td>21675.49 µs</td><td>12082.33</td></tr>
<tr><td>insertion in end</td><td>7.28 ns</td><td>242.90 ns</td><td>33.38</td><td>2.93 ns</td><td>0.40</td></tr>
<tr><td>successor</td><td>0.55 µs</td><td>1.53 µs</td><td>2.75</td><td>0.36 µs</td><td>0.65</td></tr>
<tr><td>delete</td><td>1.92 µs</td><td>1.78 µs</td><td>0.93</td><td>21295.25 µs</td><td>11070.04</td></tr>
<tr><td>memory</td><td>408 MB</td><td>4802 MB</td><td>11.77</td><td>405 MB</td><td>0.99</td></tr>
</tbody>
</table>

Figure 2 Figures (a) and (b) show the performance of the original, optimized original, lazy, packed lazy, implicit and packed implicit layouts.

Procedure. In all tests $10^8$ 32-bit integers are inserted in the data structure as a preliminary step to simulate that it has already been used\(^1\). For all the access and successor operations, $10^9$ elements have been accessed and the time reported is the average time per element. For range access, blocks of 10,000 elements have been used. For insertion/deletion, $10^6$ elements have been (semi-)randomly\(^2\) added/deleted, though in the case of vector only 10,000 elements were inserted/deleted to make the experiments run within reasonable time.
7.1 Tiered Vector Variants Experiments

In this test we compare the performance of the implementations listed in Section 5 to that of the original data structure as described in Theorem 1.

Optimized Original. By co-locating the child offset and child pointer, the two memory lookups hit adjacent memory locations. Due to the cache lines of modern processors, this means the second memory lookup will often be answered directly by the fast L1 cache. As can be seen in Figure 2, this small change in the memory layout results in a significant improvement in performance for both access and insertion. In the latter case, the running time is more than halved.

---

\(^1\) In order to minimize the overall running time of the experiments, the elements were not added randomly, but we show this does not give our data structure any benefit.

\(^2\) In order to not impact timing, a simple access pattern has been used instead of a normal pseudo-random generator.

Lazy and Packed Lazy. Figure 2 shows how the fewer memory probes required by the lazy implementation, in comparison to the original and optimized original, result in better performance. Packing the offset and pointer in the leaves results in even better performance for both access and insertion, even though it requires a few extra instructions to do the actual packing and unpacking.

Implicit. From Figure 2, we see that the implicit data structure is the fastest. This is as expected, because it requires fewer memory accesses than the other structures, except for the packed lazy, which instead has a slight computational overhead due to the packing and unpacking. As shown in Theorem 2, the implicit data structure has a bigger memory overhead than the lazy data structure. Therefore the packed lazy representation might be beneficial in some settings.

Packed Implicit. Packing the offsets array could lead to better cache performance due to the smaller memory footprint and could therefore yield better overall performance.
As can be seen in Figure 2, the smaller memory footprint did not improve the performance in practice. The simple reason for this is that the strategy we used for packing the offsets required extra computation, which clearly dominated the possible gain from the hypothesized better cache performance. We tried a few strategies to minimize the extra computation needed at the expense of slightly worse memory usage, but none of these led to better results than not packing the offsets at all.

7.2 Width Experiments

This experiment was performed to determine the best capacity ratio between the leaf nodes and the internal nodes. The six width configurations we have tested are: 32-32-32-4096, 32-32-64-2048, 32-64-64-1024, 64-64-64-512, 64-64-128-256, and 64-128-128-128. All configurations have a constant height of 4 and a capacity of approximately 130 million elements. We expected the performance of access operations to remain unchanged, since the number of operations they must perform depends only on the height of the tree, and not on the widths. We expect range access to perform better when the leaf size is increased, since more elements will then be located in consecutive memory locations. For insertion there is no clearly expected behavior: the time used to physically move elements in a leaf increases with leaf size, but fewer operations have to be performed on the internal nodes of the tree.

In Figure 3 we see that access times actually decrease slightly when the leaves get bigger. This is a bit unexpected, but is most likely due to small changes in the memory layout that result in slightly better cache performance. The same is the case for range access, but there it was expected. For insertion, we see there is a tipping point. For our particular instance, the best performance is achieved when the leaves have a size of around 512. Based on this, we have performed the remaining tests with the 64-64-64-512 configuration (unless otherwise specified).
7.3 Height Experiments

In these tests we have studied how different heights affect the performance of access and insertion operations. We have tested the configurations 8192-16384, 512-512-512, 64-64-64-512, 16-16-32-32-512, and 8-8-16-16-16-512, all resulting in the same capacity but with heights in the range 2-6. We expect the access operations to perform better on lower trees, since the number of operations that must be performed is linear in the height. On the other hand, we expect insertion to perform significantly better on higher trees, since its running time is \( O(n^{1/l}) \), where \( l \) is the height of the tree.

In Figure 4 we see that the results follow our expectations. However, the access operations only perform slightly worse on higher trees. We expect this to be because all internal nodes fit within the L3 cache, so the dominant part of the running time comes from the lookup of the element itself. (It is highly unlikely that the element requested by an access to a random position would be among the small fraction of elements that fit in the L3 cache.) Regarding insertion, we see significant improvements up until a height of 4; after that, increasing the height does not change the running time noticeably. This is most likely because the hidden constant in \( O(n^{1/l}) \) increases rapidly with the height.

7.4 Configuration Experiments

In these experiments we test a few hypotheses about how different changes impact the running time. The results are shown in Figure 5; the leftmost result (base) is our final and best implementation, to which we compare our hypotheses.

Rotated: As already mentioned, the insertions performed as a preliminary step to the tests are not done at random positions. This means that all offsets are zero when our real operations start. The purpose of this test is to ensure that there are no significant performance gains in starting from such a configuration, which could otherwise lead to misleading results.
To this end, we have randomized all offsets (in a way such that the data structure is still valid, but the order of the elements changes) after doing the preliminary insertions but before timing the operations. As can be seen in Figure 5, the difference between this and the normal procedure is insignificant, so we find that our approach gives a fair picture.

Non-Aligned Sizes: In all our previous tests, we have ensured that all nodes had an out-degree that is a power of 2. This was chosen in order to let the compiler simplify some calculations, i.e. replacing multiplication/division instructions by shift/and instructions. As Figure 5 shows, using sizes that are not powers of 2 results in significantly worse performance. Besides showing that one should always pick powers of 2, it also indicates that not only the number of memory accesses during an operation is critical for performance, but also the amount of computation performed.

Non-Templated: The non-templated results in Figure 5 show that the change to templated recursion has had a major impact on the running time. It should be noted that some improvements have not been implemented in the non-templated version, but it gives a good indication that this change has been quite useful.

8 Conclusion

This paper presents the first implementation of a generic tiered vector supporting any constant number of tiers. We have shown a number of modified versions of the tiered vector, and employed several speed optimizations in the implementation. These implementations have been compared to vector and multiset from the C++ standard library. The benchmarks show that our implementation stays on par with vector for access and update operations while providing a considerable speedup of more than $40 \times$ compared to set. At the same time, the asymptotic difference between the logarithmic complexity of multiset and the polynomial complexity of the tiered vector for insertion and deletion operations has only little effect in practice.
For these operations, our fastest version of the tiered vector suffers less than a 10% slowdown. Arguably, our tiered vector provides a better trade-off than the balanced binary tree data structures used in the standard library for most applications that involve large instances of the dynamic array problem.
Using Firebird (work in progress)

IBPhoenix Editors, Firebird Project

Version 2.0.2, 16 July 2007

## Table of Contents

1. About this book
   1.1. Work in progress!
   1.2. Origins
   1.3. More documentation
2. About Firebird
   2.1. Firebird's origins
   2.2. The Firebird Foundation
   2.3. Overview of Features
      2.3.1. Firebird Server
      2.3.2. Firebird clients
      2.3.3. Summary of features
   2.4. Classic and Superserver architectures
      2.4.1. Comparison of characteristics
      2.4.2. Which is better?
      2.4.3. Embedded server
   2.5. System Requirements
      2.5.1. Server Memory (all platforms)
      2.5.2. Disk space
      2.5.3. Minimum machine specifications
3. About Clients and Servers
   3.1. What is a Firebird client?
   3.2. The Firebird client library
      3.2.1. Client filenames
   3.3. The server
      3.3.1. Server tasks
      3.3.2. Multiple servers
      3.3.3. Server filenames
   3.4. Client and server combined: Firebird Embedded Server
      3.4.1. Embedded server on Windows
      3.4.2. Embedded server deployment
      3.4.3. Embedded server on Linux?
   3.5. Application development
      3.5.1. Embedded SQL in Firebird applications
      3.5.2. Predefined vs. dynamic queries
      3.5.3. RAD environments and component suites
      3.5.4. Other connectivity components and drivers
      3.5.5. API applications
   3.6. Server-side programming
      3.6.1. Stored procedures
      3.6.2. Triggers
      3.6.3. PSQL limitations
      3.6.4. User-defined functions
   3.7. Multi-database applications

Appendix A: Document history
Appendix B: License notice

# Chapter 1. About this book

## 1.1. Work in progress!

This document is a work in progress. The Firebird documenters — all volunteers who work on this project in their spare time — will publish more chapters as they go along. As we are but a few, the pace is pretty slow.
However, the few chapters that are present are up to date and will hopefully be a useful source of information for you. If you find any errors or inconsistencies, please report them to the maintainers. Email addresses are at the end of the book.

## 1.2. Origins

Using Firebird is not a new book. The IBPhoenix editors wrote it years ago for distribution on their Developer's CD, when Firebird was still at version 1.0. Since then, we have seen the arrival of Firebird 1.5 and 2.0, and most of the book is now in serious need of updating.

In 2005 the IBPhoenix company decided to open-source the entire 26-chapter manual and hand it over to the Firebird Project for upgrading and maintenance. We would like to thank IBPhoenix here for making this work freely available.

## 1.3. More documentation

If you're new to Firebird or want to brush up on the basics, read the Firebird Quick Start Guide first. There's one available for every version of Firebird. Pick up yours from https://www.firebirdsql.org/en/documentation/. This is also a good starting place to find links to more Firebird documentation.

# Chapter 2. About Firebird

Firebird is a powerful, compact client/server SQL relational database management system which can run on a variety of server and client operating systems. Its officially supported platforms are Windows and Linux, but Firebird is also known to run on several other OSes, such as FreeBSD and Apple Mac OS X. Firebird features a higher level of SQL standards compliance than most other industrial-strength client/server database management systems on the market today, while implementing some powerful language features in the vendor-specific sphere of procedure programming.

## 2.1. Firebird's origins

The product which today we call Firebird has been around, under a variety of names, for well over 20 years. An overview of its interesting and at times stormy history can be found at https://www.firebirdsql.org/en/historical-reference/.
Developed as an ongoing open source project, Firebird is a descendant of Borland's InterBase 6.0 Open Edition code, which was released for open source development in July 2000 under the InterBase Public License (IPL). The Firebird source code tree is maintained on the international open source code foundry, GitHub, by a large team of professional developers who donate time and expertise voluntarily to fix, develop and enhance this popular and feature-rich database management software. The Firebird software products are distributed completely free of registration or deployment fees.

## 2.2. The Firebird Foundation

The Firebird Foundation supports the development of Firebird in several ways, among other things by issuing grants to developers. Many people and companies who find Firebird useful have already become members or sponsors. If you like Firebird, please consider doing the same. Making a one-time donation is also possible. You can find more information at https://www.firebirdsql.org/en/firebird-foundation/.

## 2.3. Overview of Features

Firebird is true client/server software, architected for use in local and wide-area networks. Accordingly, its core consists of two main software programs:

1. The database server, which runs on a network host computer.
2. The client library, through which users on remote workstations connect to and communicate with databases managed by the server.

TCP/IP is the network protocol of choice for Firebird on all platforms, although Windows Networking (NetBEUI) is also supported for networks having Firebird running on a Windows NT, 2000/2003 or XP host server. It is possible to run both server and client on the same physical machine and have the client connect to the server through TCP/IP local loopback. On Windows machines, a single local client can also connect to a database by sharing inter-process communications memory with the Firebird server.
On Linux, even direct I/O to the database file is possible, but only with the so-called Classic Server — more on this later.

### 2.3.1. Firebird Server

Firebird server runs on a number of platforms, including:

- Windows NT 4.0, 2000, and 2003 (Server or Workstation editions)
- Windows 95/98 and ME
- Windows XP (Home, Professional and .NET editions)
- Linux, FreeBSD and several other UNIX-like operating systems
- Mac OS X (Darwin)

The Firebird Embedded Server is a special variant which contains both the client and the server functionality. You can ship it with your application, unpack it, and it's ready to roll. You'll learn more about its up- and downsides later on in this guide.

### 2.3.2. Firebird clients

A remote workstation or a local client requires only the shared client library — a dynamic link library on Windows and a shared object on other platforms — and an application program which can pass and receive parameters to and from the library's interface.

Generally, you would also install a copy of the client library on the host server, for use with several of the Firebird command-line utilities and/or any server-based management programs you might use. Many of these utilities can also be run remotely, however. A remote system administrator can manage some of the essential services provided by these utilities by accessing them through a host service controller process.

For Java connectivity, Firebird provides the JDBC/JCA-compliant Jaybird driver. Client applications written against Jaybird can run on any Java-enabled platform, even those that don't support Firebird server. The legacy InterClient Java driver is no longer maintained, due to its severe limitations.

### 2.3.3. Summary of features

Table 1. Summary of features

<table>
<thead>
<tr>
<th><strong>Firebird Feature</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>SQL compliance</strong></td>
<td>Firebird conforms to entry-level SQL-92 requirements.
It has support for formal, cascading referential integrity constraints, updatable views, and full, left and right outer joins. Client applications can link to the Firebird API, a messenger function library for client-server communication.</td> </tr> <tr> <td></td> <td>The Firebird server supports development of dynamic SQL client applications. It also ships with a host-language precompiler and in-engine language support for embedded SQL development in host languages such as C/C++ and COBOL.</td> </tr> <tr> <td></td> <td>Several extended SQL features are also implemented. Some of them (e.g. stored procedures and triggers, SQL roles, and segmented blob support) anticipate SQL99 extensions.</td> </tr> <tr> <td><strong>Multiuser database access</strong></td> <td>Firebird is designed to provide for many clients accessing a single database at the same time. In their turn, client applications can have active connections to several databases simultaneously. Firebird will automatically protect cross-database transactions through a two-phase commit mechanism. Triggers and stored procedures can post event messages to inform interested clients of specific events in the database.</td> </tr> <tr> <td><strong>User-defined functions</strong></td> <td>User-defined functions (UDFs) can be written and stored on the server machine in external shared object libraries. Once a UDF is declared to a Firebird database as an external function, it is available to any client application accessing the database, as if it were a native function of the SQL language.</td> </tr> <tr> <td></td> <td>This flexibility accounts for the very small footprint of the server engine: Firebird database application solutions are deployed without the extra cargo of a server that supports hundreds of unused functions natively in its engine.</td> </tr> <tr> <td><strong>Transactions</strong></td> <td>Firebird client applications have full control over the starting, committing, and rolling back of transactions. 
Every transaction exists in its own consistent context, determining isolation from other transactions and resolution of multi-user conflicts at commit time.</td>
</tr>
<tr>
<td></td>
<td>A transaction's uncommitted view of the state of the database is kept consistent with its initial view and any changes which are made within its own context.</td>
</tr>
<tr>
<td></td>
<td>Client applications can isolate multiple tasks in separate transactions simultaneously. A single transaction can bridge a task involving an unlimited number of connected databases, with an automatic two-phase commit mechanism to protect integrity, should a database become unavailable before the transaction completes.</td>
</tr>
</tbody>
</table>

| **Firebird Feature** | **Description** |
|---|---|
| **Multigenerational architecture** | Firebird uses a multi-generational architecture, by which multiple versions of each data row can be created and stored as necessary if a transaction modifies the row. In a background thread, extinct versions are garbage-collected and the current and pending versions are managed, in order to give each transaction a persistent view and to resolve priorities when update conflicts occur. The multi-generational architecture of Firebird means that readers never block writers. Firebird allows any row to be visible to any transaction, even if other transactions have updates pending for it. Readers may of course see another (older) version of the row than the writer. The Firebird engine maintains version statistics which it uses, in conjunction with the isolation and lock response attributes of each transaction, to determine which transaction gets priority when conflicting updates are requested. |
| **Optimistic row-level locking** | In Firebird, user-initiated locking is unnecessary. The engine locks a row to other transactions only when a transaction signals that it is ready to update it. This is known as optimistic row-level locking. This style of locking has great advantages in increasing throughput and reducing serialisation for client tasks, when compared with systems that lock rows, or even entire tables, from the moment the transaction begins. |
| **BLOB filters** | Firebird provides the capability for the developer to supply filter code for converting stored BLOBs from one format to another. For example, a BLOB filter could be written to output a text BLOB, stored in RichText format, as XML or HTML; or to output a stored JPEG image in PNG format. The filters, written in the developer's language of choice and compiled for the server platform OS, are stored on the server machine in a shared object library and must be declared to databases that want to use them, exactly like UDF libraries. |
| **Database administration** | Firebird comes with various command-line utilities for managing databases and servers. Thanks to its open source character, Firebird is also abundantly supported by open source, freeware and commercial GUI database administration utilities. Using his or her preferred constellation of tools, the database administrator can manage server security; make and restore database backups; perform maintenance tasks; and produce database and lock manager statistics. |

<table>
<thead>
<tr>
<th>Firebird Feature</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Security</td>
<td>Firebird maintains a security database storing user names and encrypted passwords. It is located in the root directory of the server installation and controls access to the server itself and all databases in its physical domain. The SYSDBA account has full, destructive privileges to all databases on the server. Firebird provides the capability to define roles at database level.
Within a database, only SYSDBA and the database owner have full privileges; otherwise, all privileges must be granted explicitly to individual users and/or roles. It is possible — and recommended — to define a set of permissions for a role and then grant that role to specific users as required. SYSDBA can add and delete user accounts and modify the details of an account, including the password. Passwords, once stored, are not human-readable, even by SYSDBA. Physical database paths can be shielded from the client using aliases. Access to database files, external tables, and UDFs can be restricted to explicitly specified filesystem trees only — or even tighter — by setting the appropriate parameters in the configuration file firebird.conf. The Firebird server process can — and if possible, should — run as a user other than the system or superuser account (root, Administrator or localsystem). This will limit the damage in the unfortunate event that the server should be hacked.</td> </tr> </tbody> </table>

## Backups and restores

Firebird comes with two command-line backup/restore tools, each with its own specific advantages and limitations. The `gbak` utility backs up a database by dismantling it into a compact structure in which metadata, data and database-level configuration settings are stored separately. It also performs some important housekeeping tasks on the database during the backup process. The generated backup is not readable as a database file; you need `gbak` again to restore it. In restore mode, `gbak` can create a new file or overwrite an existing database.
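As an illustration, a typical `gbak` backup and restore cycle might look like the following. The `-b` (backup) and `-c` (create) switches are standard `gbak` options; the database paths and the SYSDBA password are placeholders for this sketch.

```shell
# Back up a database into a portable backup file (-b = backup mode)
gbak -b -user SYSDBA -password masterkey /data/employee.fdb /backups/employee.fbk

# Restore the backup into a NEW database file (-c = create mode)
gbak -c -user SYSDBA -password masterkey /backups/employee.fbk /data/employee_new.fdb
```

Restoring with `-r` instead of `-c` would overwrite an existing database, so `-c` is the safer choice while verifying that a backup restores cleanly.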
Because of the useful tasks it performs, experienced Firebird programmers often use a `gbak` backup-restore cycle to:

- erase obsolete record versions;
- change the database page size;
- convert the database from single- to multifile;
- safely transfer a database to another operating system;
- upgrade InterBase or Firebird databases to a newer version;
- make a metadata-only backup in order to create a new, empty database with the same structure.

Several user-friendly GUI front-ends are available for `gbak`, both as stand-alone tools and as utilities within some of the database administration programs. It is also very simple to set up OS-level scripts, batch files or daemons to perform backups.

A more recent tool by the name of `nbackup` lacks most of `gbak`'s housekeeping and compaction features, but has the following advantages:

- Incremental backups, which save time and disk space;
- Backups at hardware speed;
- Backups possible with your own preferred (non-Firebird) tool.

Neither backup tool requires exclusive access to the database. Other clients can remain connected and perform operations on the database while the backup is in progress.

| **Firebird Feature** | **Description** |
|---|---|
| **Other tools** | Firebird ships with several other command-line administration tools, including: **isql** - An SQL query utility which can run dynamic SQL (DSQL) and several specialised statements interactively or in batch from a script. This is the tool to use for quick access to information about your metadata and for running data definition scripts. **gfix** - A database housekeeping and repair kit for minor corruptions. This tool is often used in combination with `gbak` for identifying and recovering damaged data. **gsec** - A command-line interface to the security database. **gstat** - A utility for printing out the current configuration and statistics of a running database.
**fb_lock_print** - A utility for printing out the Lock Manager report on a running database. |
| **Services API** | Firebird provides a Services API which developers can use to perform a number of security and management tasks programmatically (and if needed, remotely). Strictly speaking, the Services API (part of the client library) is the interface to the Services Manager (part of the server), but the terms are often used interchangeably. |

### 2.4. Classic and Superserver architectures

Firebird server comes in two distinct architectures for managing multiple client connections: **Superserver** and **Classic Server**. For Windows, both architectures are included in a single download. For Linux, there are separate download packages which have either CS or SS in their name, indicating the type of server.

The Classic server starts a separate process for each connection to a database under its control. Each client is allocated its own database cache buffers. Superserver serves many clients simultaneously within a single process. Instead of separate server processes for each connection it uses threads of a single process and pools the database cache buffers for use by all connections.

If you are upgrading from a previous version of Firebird or faced with the choice between Classic and Superserver, the information listed in the comparison table below will help to explain what the differences are and how they affect database operations. The server architecture does not affect the structure of databases or the way client applications work. Firebird databases built on a Classic server can be operated on by an equivalent Superserver server, and vice versa. The same client library can connect to either server.
In other words, if you begin by installing the Superserver distribution of Firebird on your Linux host machine and, later, decide to change to Classic, any applications you wrote for your Superserver-hosted databases will work unmodified and the databases themselves will continue to behave as they did before.

### 2.4.1. Comparison of characteristics

The table below gives a quick overview of the main differences between Classic and Superserver. These differences will be discussed in more detail in the subsections that follow.

**Table 2. Comparison of Classic and Superserver architectures**

<table> <thead> <tr> <th>FEATURE</th> <th>CLASSIC</th> <th>SUPERSERVER</th> </tr> </thead> <tbody> <tr> <td><strong>Availability</strong></td> <td>Linux: all Firebird versions. Windows: Firebird 1.5 and higher.</td> <td>All Firebird versions.</td> </tr> <tr> <td><strong>Executable</strong></td> <td>fb_inet_server(.exe)</td> <td>fbserver(.exe)</td> </tr> <tr> <td><strong>Processes</strong></td> <td>Multiple, on demand, one instance per client connection.</td> <td>Single server process; each client request is handled in its own thread.</td> </tr> <tr> <td><strong>Lock management</strong></td> <td>gds_lock_mgr process.</td> <td>Implemented as a thread.</td> </tr> <tr> <td><strong>Local access on Linux</strong></td> <td>Fast, direct I/O to the database file is possible. But you can also connect network-style via localhost.</td> <td>Network-style access only.</td> </tr> <tr> <td><strong>Local access on Windows</strong></td> <td>Versions 1.x: network-style access only.</td> <td>Versions 1.x: a single (!) local connection can be made using IPC (IPServer). Network-style local connections are also supported.</td> </tr> <tr> <td></td> <td colspan="2">Firebird 2 and higher: both architectures support safe, multiple local connections on Windows machines through XNET.</td> </tr> <tr> <td><strong>Resource use</strong></td> <td>One cache per process.</td> <td>One cache space for all clients.</td> </tr> <tr> <td><strong>Multiprocessor support</strong></td> <td>Yes.</td> <td>No. Performance may drop if not properly configured.</td> </tr> <tr> <td><strong>Services Manager + API</strong></td> <td>Partial in Firebird 1.5, full in 1.5.1 and up.</td> <td>Full.</td> </tr> <tr> <td><strong>Guardian on Windows</strong></td> <td>On Firebird 2 Classic/Win only, a bug prevents you from using the Guardian if you run Firebird as an application.</td> <td>The Guardian functions with all Windows Superservers, whether run as a service or as an application.</td> </tr> <tr> <td><strong>Guardian on Linux</strong></td> <td>You can’t use the Guardian with any Firebird Classic version on Linux. This is by design.</td> <td>The Guardian functions with all Linux Superservers.</td> </tr> </tbody> </table>

### Executable and processes

**Classic**

Runs on demand as multiple processes. When a client attempts to connect to a Firebird database, one instance of the `fb_inet_server` executable is initiated and remains dedicated to that client connection for the duration of the connection.

**Superserver**

Runs as a single process, an invocation of the `fbserver` executable. `fbserver` is started once by the owning user or by a boot script. This process is always running, waiting for connection requests. Even when no client is connected to a database on the server, `fbserver` continues to run quietly.
On Linux, the Superserver process does not depend on `inetd`; it waits for connection requests to the `gds_db` service itself. ### Lock management **Classic** For every client connection a separate server process is started to execute the database engine, and each server process has a dedicated database cache. The server processes contend for access to the database, so a Lock Manager subsystem is required to arbitrate and synchronise concurrent page access among the processes. **Superserver** The lock manager is implemented as a thread in the `fbserver` executable. It uses inter-thread communication mechanisms to resolve access issues. Therefore, an external process isn’t needed. ### Resource use **Classic** Each instance of `fb_inet_server` keeps a cache of database pages in its memory space. While the resource use per client is greater than in Superserver, Classic uses fewer overall resources when the number of concurrent connections is low. **Superserver** Employs one single cache space which is shared by client attachments, allowing more efficient use and management of cache memory when the number of simultaneous connections grows larger. ### Local access on Linux **Classic** On Linux only, the Classic architecture permits application processes that are running on the same machine as the database and server to perform I/O on database files directly. Note that this is only possible if the client process has sufficient filesystem-level access rights to the database as well as some other files. Network-style access to the local server (via localhost or equivalents) is supported on all systems. **Superserver** You can only connect to local databases via TCP/IP loopback, using localhost or any other host name / IP number that points back to the local machine. (Many clients may let you get away with omitting the hostname though, and supply localhost to the server automatically.) 
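For example, with the command-line `isql` tool the two access styles on Linux can be seen in the connect string alone (the database path and credentials here are hypothetical):

```shell
# Classic only: direct local access, no host name in the connect string
isql /var/firebird/test.fdb -user SYSDBA -password masterkey

# Works with both architectures: network-style access via the TCP/IP loopback
isql localhost:/var/firebird/test.fdb -user SYSDBA -password masterkey
```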
### Local access on Windows

**Classic**

In Windows Classic versions prior to Firebird 2, you can only connect to local databases via network loopback, using localhost or an equivalent. Firebird 2 and higher support local access through the reliable XNET protocol, which permits multiple simultaneous connections in a safe way.

**Superserver**

Firebird 1.5 and earlier Superservers use the IPC (IPServer) protocol for single local connections on Windows. This method is not as fast and certainly not as robust as the direct I/O on Linux Classic. Furthermore, IPC needs an internal window to exchange messages. As a consequence, local access on these versions is only available if:

- the Firebird process runs as localsystem (the default), and
- the configuration parameter CreateInternalWindow has not been set to 0 (you would set this to 0 if you want to run multiple servers simultaneously).

Firebird 2 uses a different local protocol — XNET — which doesn’t suffer from these restrictions, and supports multiple connections. Of course, if the local protocol is disabled you can still connect to any local database via localhost, provided TCP/IP is available on your system.

### Multiprocessor support

**Classic**

Supports SMP (symmetrical multi-processor) systems. This improves the performance in case of multiple unrelated connections.

**Superserver**

No SMP support. In fact, Superserver performance may drop significantly on multiprocessor Windows systems as a result of processor swapping. To prevent this from happening, set the CpuAffinityMask parameter in the configuration file firebird.conf.

### Services Manager and Services API

**Classic**

Versions up to and including 1.5 have a partially implemented Services Manager, supporting tasks like backup/restore, database shutdown etc. over the network. Other service tasks have to be performed locally using the client tools that come with Firebird. Versions 1.5.1 and up have a full Services Manager, just like Superserver.
**Superserver**

The Services Manager, present in all Firebird Superserver versions, allows you to perform management tasks (backup/restore, database shutdown, user management, stats, etc.) programmatically. You can connect to the Services Manager over the network and thus perform these tasks remotely.

### Use of the Firebird Guardian

The Firebird Guardian is a utility which monitors the server process and attempts to restart the server if it terminates abnormally.

**Classic**

Due to a bug in the Guardian, it can’t be used with Firebird 2 Classic on Windows if run as an application. If Firebird runs as a service, the Guardian works correctly. Since the Windows 9x–ME line doesn’t support services, you can’t use the Guardian with Firebird 2 Classic on those systems. This bug does not exist in Firebird 1.5 versions. (The Guardian can’t be used *at all* with Firebird Classic on Linux, but that’s by design, not by accident.)

**Superserver**

The Guardian works fine with Superserver on both Linux and Windows, whether as a service or as an application.

### 2.4.2. Which is better?

In abstract terms, neither architecture is a clear winner. One architecture generally outshines the other under specific workload conditions:

- A single application running on the same machine as the server is faster with the Classic architecture.
- For a Linux application embedded in an appliance, Classic is better, because it provides a single process from application to disk.
- On a single-processor machine, an application with larger numbers of frequently contending clients is faster with Superserver, because of the shared cache.
- On SMP machines, small numbers of clients whose data updates do not impact others’ tasks work better in the Classic architecture.

### 2.4.3. Embedded server

Besides Superserver and Classic, there’s Firebird Embedded Server for Windows, which you can download as a separate package.
This is not really a different architecture, but a Firebird client plus Superserver rolled into one DLL for ease of deployment. Although it has a number of downsides, it may be an attractive option if you want to include Firebird with your Windows application. More on Embedded Server in the [client-server chapter](#).

### 2.5. System Requirements

Firebird makes efficient use of system resources. Both server and clients are modest in their disk space and memory requirements. Some specific details are provided below.

### 2.5.1. Server Memory (all platforms)

**Table 3. Memory Requirements**

<table> <tbody> <tr> <td>Firebird server process</td> <td>When there are no connections, the Firebird server uses around 2–4 Mb of memory, depending on the version.</td> </tr> <tr> <td>Client connections</td> <td>Each client connection uses from 115 Kb to several Mb of additional memory on the server host. The exact load depends on the Firebird version, the structure of the database and the client characteristics.</td> </tr> <tr> <td>Database cache</td> <td>Memory is also needed for database page caching. The default cache size is configurable, in database pages. Superserver shares a single cache among all connections and increases cache automatically when required. Classic creates an individual cache per connection.</td> </tr> <tr> <td>Other server tasks</td> <td>The server uses additional memory for lock management, in-memory sorting, and so on. For some tasks the amount can be configured.</td> </tr> </tbody> </table>

### 2.5.2. Disk space

Disk space requirements vary somewhat according to platform, architecture and Firebird version.

Table 4.
Approximate Disk Space Requirements

<table> <thead> <tr> <th></th> <th>Firebird 1.5.x</th> <th>Firebird 2</th> </tr> </thead> <tbody> <tr> <td>Complete server installation</td> <td>9–12 Mb</td> <td>12–14 Mb</td> </tr> <tr> <td>Client library</td> <td>350 Kb – 2 Mb<sup>[1]</sup></td> <td>380 Kb – 2.5 Mb<sup>[1]</sup></td> </tr> <tr> <td>Command-line tools</td> <td>1.5 Mb</td> <td>1.7–2.7 Mb</td> </tr> <tr> <td>Temporary server space</td> <td>Additional disk space is required for temporary storage during operation, e.g. for sorting. Location(s) and maximum amount of space used can be configured according to performance demands and the likely volume and type of data to be handled.</td> <td></td> </tr> </tbody> </table>

In addition, third-party database management utilities will require 1 Mb to several dozen Mb, depending on which one(s) you choose.

### 2.5.3. Minimum machine specifications

Wherever Intel processors are mentioned, the equivalent or better AMD processors can also be used.

Table 5.
Minimum machine specifications

<table> <thead> <tr> <th>OS</th> <th>Version</th> <th>CPU</th> <th>RAM</th> </tr> </thead> <tbody> <tr> <td>Microsoft Windows</td> <td>NT 4.0 with Service Pack 6a Windows 95/98/ME Windows 2000 (SP1) / 2003 Windows XP</td> <td>486DX2 66 MHz (Pentium 100 recommended)</td> <td>16Mb for client 64Mb for multi-client server</td> </tr> <tr> <td>Linux</td> <td>1.0</td> <td>Intel 486</td> <td>16Mb for client 64Mb for multi-client server</td> </tr> <tr> <td></td> <td>1.5</td> <td>Pentium</td> <td></td> </tr> <tr> <td></td> <td></td> <td>glibc 2.2.5, libstdc++ 5.0</td> <td></td> </tr> <tr> <td></td> <td></td> <td>RedHat 8.0, Mandrake 9.0, SuSE 8.0</td> <td></td> </tr> <tr> <td></td> <td></td> <td>On SuSE 7.3, first install libgcc-3.2-44.i586.rpm and libstdc++-3.2-44.i586.rpm</td> <td></td> </tr> <tr> <td>Solaris</td> <td>2.6 or 2.7</td> <td>SPARC, UltraSPARC</td> <td>16Mb for client 64Mb for multi-client server</td> </tr> <tr> <td>Solaris</td> <td>?</td> <td>Intel</td> <td>32 Mb 64 Mb for multi-client server</td> </tr> <tr> <td>Apple Macintosh</td> <td>Mac OS/X (Darwin)</td> <td>See distribution notes</td> <td>See distribution notes</td> </tr> <tr> <td>FreeBSD</td> <td>v.4.x</td> <td>See distribution notes</td> <td>See distribution notes</td> </tr> <tr> <td>HP-UX</td> <td>10.0</td> <td>See distribution notes</td> <td>See distribution notes</td> </tr> </tbody> </table>

[1] The high end of the client library range is occupied by Linux Classic clients, which contain a complete Firebird engine.

# Chapter 3. About Clients and Servers

In this chapter we take a look at the essential pieces of client/server systems as they are implemented in Firebird and examine how applications interact with the client and server.

### 3.1. What is a Firebird client?
A Firebird client is a program, usually written in a high-level language such as C, C++, Delphi, Java, PHP or Perl, that provides end-user access to the features and tools of the Firebird database management system and to data stored in databases. The *isql* interactive SQL utility is an example of a client application.

In the client/server model, applications never touch the database physically. Any application process converses with the server through the Firebird client library which resides on the client workstation. It surfaces a programming interface of function call structures known as the Firebird API. This client library must be installed on every user’s workstation. Generally, other layers are also involved in the interface between the application program and the Firebird client, providing generic or application-language-specific mechanisms for populating and calling the API functions.

Firebird clients typically reside on remote workstations and connect to a Firebird server running on a host node in a network. Firebird also supports local connection, that is, a client application, the Firebird client library and the Firebird server all executing on the same physical box. Firebird clients need not run on the same type of hardware and/or operating system as the server they connect to. It is quite common to have a number of Windows 98 or XP workstations talking to a server that runs under Windows NT, 2000 or 2003, or any of several flavours of UNIX or Linux.

### 3.2. The Firebird client library

The Firebird client library provides an Application Programming Interface (API) with functions for connecting to servers and working with databases. The library functions communicate with the server(s) using a dedicated Firebird client/server protocol that sits on top of the general network protocol provided by the OS. All client applications and middleware must use the API in some way to access Firebird databases.
The Firebird API is backwardly compatible with the InterBase API. The InterBase API Guide (available at https://www.ibphoenix.com/downloads/60ApiGuide.zip) contains extensive documentation on the use of the API in applications. Additional features available in the Firebird API are documented in the Firebird release notes.

### 3.2.1. Client filenames

The Firebird client library files are fbclient.dll (Windows), libfbclient.so (Linux network client) and libfbembed.so (Linux local client with embedded engine, Classic only). In order not to break certain existing applications, a copy of fbclient.dll with the old name gds32.dll can be installed on Windows machines. On Linux, legacy libgds* symlinks are installed automatically.

### 3.3. The server

The Firebird server is a program that runs on a machine with which client workstations can communicate by way of a network. Clients connect to databases physically located on this server host machine. The same machine that hosts the executing Firebird server process must host the Firebird databases in its own storage space. Only the server process has direct, filesystem-level access to the database files.

The server is fully network-enabled, serving multiple connections simultaneously, in response to requests from other nodes in the network. If the network runs under TCP/IP protocol, the scope of the network is virtually limitless. In the Superserver architecture, the server process is multi-threaded. In Classic, a new process is started for each connection.

### 3.3.1. Server tasks

The server’s job is to

- regulate access by transactions to individual sets of data;
- ensure that each transaction gets and keeps a consistent view of the permanently stored data which it has requested through the client;
- receive requests to modify or delete a row and either:
  - grant a transaction exclusive write access to the row, or
  - deny access if another transaction already has a write pending;
- maintain the statistics for each database;
- maintain and refer to the metadata for each database, in order to manage transactions, data and “house-cleaning”.

Clients’ requests result in the server performing tasks such as

- creating new databases;
- creating new data structures inside databases;
- validating and compiling source code for procedures;
- searching tables for data matching provided criteria;
- collating, sorting and tabulating sets of data;
- passing sets of data back to the requesting client;
- modifying the values of data;
- inserting new data into tables;
- removing (deleting) data;
- executing compiled procedures;
- routing messages to clients.

### 3.3.2. Multiple servers

Starting at version 1.5, you can have multiple Firebird servers running on the same machine. Firebird 1.5 servers can also coexist with a Firebird 1.0 or InterBase server. Some extra configuration steps are required though. One thing to be aware of is that Firebird 1.5 under Windows needs the CreateInternalWindow configuration parameter to be set to 0 in order to run multiple servers. As a side effect, the local connection protocol is disabled (you can still connect to local databases via localhost though). This limitation no longer exists in Firebird 2, and the CreateInternalWindow parameter has been removed in that version.

### 3.3.3. Server filenames

Server binaries on Windows are called fbserver.exe (Superserver) and fb_inet_server.exe (Classic). On Linux, the same names are used, but without the .exe extension.

3.4.
Client and server combined: Firebird Embedded Server

### 3.4.1. Embedded server on Windows

The Windows Embedded Server is a Superserver engine plus client rolled into one library, called fbembed.dll. It is available as a separate download package. The embedded server (introduced with Firebird 1.5) was specifically designed to facilitate deployment of applications that ship with a Firebird server included. Installation is just a matter of unpacking the DLL and a few other files to the desired location. The security database is not used at all: anyone can connect to any database, as long as the user running the application has filesystem-level access rights to the database(s) in question.

You can have multiple embedded servers running at the same time, and you can have multiple apps connecting to the same embedded server. Having a regular server already running isn’t a problem either. However, an embedded server locks a database file for its own exclusive use after successful connection. This means that you cannot access the same database from multiple embedded server processes simultaneously (or from any other servers, once an embedded server has locked the file).

The embedded server has no facility to accept any network connections. Only true local access is possible, with a connect string that doesn’t contain a host name (not even localhost). Needless to say, this is not for your regular client-server database usage, as it bypasses a lot of security features. In effect, it uses Firebird as a desktop database system.

A Firebird embedded server DLL can also be used as a network client. That is, if a regular Firebird server is listening for connections on the target computer, you can connect to databases on that system using a network-style connect string like inca:C:\School\Databases\Pupils.fdb.
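The distinction shows up in the connect string alone. For instance, running `isql` against a copy of the embedded DLL (re-using the example path and host name above; the passwords are placeholders):

```shell
# True embedded access: no host name, the database is opened directly by the DLL
isql C:\School\Databases\Pupils.fdb -user SYSDBA -password anything

# Network client mode: a host name makes the same DLL act as an ordinary client
isql inca:C:\School\Databases\Pupils.fdb -user SYSDBA -password masterkey
```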
This also implies that if a regular server is active on your local computer, you can connect to local databases through that regular server with your embedded server as a client using the “localhost:” syntax. This may seem contradictory to what has been said before about the absence of network support, but bear in mind that if you connect to localhost (or any other host), you are not using the embedded server; you’re only using the client part of fbembed.dll to connect to another server. You’ll have to provide a valid user name and password for this to work.

### 3.4.2. Embedded server deployment

First, download the Embedded server kit from SourceForge. It’s typically named Firebird-n.n.n.xxxx_embed_win32.zip, with n.n.n.xxxx the Firebird version and build number. After unzipping, you’ll find the embedded server fbembed.dll in the root directory of the package, along with some other files. Additionally, there are three subdirectories doc, intl and udf.

To make your application work with the embedded server:

1. Copy fbembed.dll into your application directory. Rename it to fbclient.dll or gds32.dll, depending on what your application expects as a Firebird client filename. Many applications still look for gds32.dll. Firebird command-line tools like isql and gbak — which you can also run against the embedded server — want fbclient.dll. You can also make copies with both names.
2. Also copy firebird.msg and ib_util.dll to your application directory. Copy aliases.conf if your application uses aliases to connect. The configuration file firebird.conf is only needed if you want to change the Firebird root directory; this will be discussed later.
3. For Firebird 2 or higher, copy the icu*.dll libraries too.
4. From the intl and udf directories, copy whatever your application or databases may need to same-named folders under your application directory.
5.
Now if you run your application it will use the embedded server DLL to connect to any local database you desire, provided that the Windows user who runs the application has sufficient access rights to the database file(s) in question! Any combination of user name and password is accepted, as long as neither is an empty string (a space is OK though).

The most important benefit of all this is that you can easily pack the Firebird Embedded files with your application and have them installed or unzipped at your users' computers without having to perform a separate Firebird install there (with all the additional worries of sufficient privileges, possible interference with already present Firebird or InterBase servers, etc. etc.). You can also have several embedded server versions (say 1.5.3 and 2.0) running on the same computer without the need for special configuration steps to keep them out of each other’s way.

Please note that, even though the security database is bypassed, SQL privileges — specified in the database itself — still apply: if you connect as Firebird user ZIGGY and ZIGGY doesn’t have access to table STARDUST, you won’t get into that table. This is not as worrying as it seems, because you can connect as any user you like, including SYSDBA, with a dummy password.

### Placing the Firebird files elsewhere

By default, Firebird Embedded Server considers the folder that fbembed.dll lives in (under whatever name) as the Firebird root directory. In the setup described above, this is your application directory. You may want to place the Firebird files somewhere else, so as not to clutter your application directory. To do this, you must tell the server where to look for them. Let's say you want the Firebird files to reside in a folder called D:\FbEmbedded:

1.
Copy firebird.conf to your application directory and edit the RootDirectory parameter like this:

```
RootDirectory = D:\FbEmbedded
```

Alternatively, you may set the FIREBIRD environment variable to achieve the same. If both the configuration file parameter and the FIREBIRD environment variable are present, the latter takes precedence. The FIREBIRD environment variable will be picked up by every Firebird server on the system (embedded or not) at startup, and will override their registry settings and configuration file parameters. So use it with care, if at all.

2. Copy/move the following items to D:\FbEmbedded:
   - firebird.msg
   - aliases.conf (if used)
   - the intl and udf subdirs plus contents (in so far as needed)
3. The following files however must be left in your application’s directory:
   - Any and all copies/renames of fbembed.dll
   - firebird.conf
   - ib_util.dll
   - for Firebird 2 and up: the icu*.dll libraries

### 3.4.3. Embedded server on Linux?

The Linux Classic server comes with a client library called libfbembed.so which is used for local connections. This library contains a full Firebird engine, which explains why it's so much bigger than the Superserver client library. Local connections through this library are part of the user application process, not of a separate server process. Therefore, the user of the library must have filesystem-level access rights to the database — just as with Windows Embedded. So yes, this is a true embedded server.

There are also some differences. First, Linux Classic doesn't require an exclusive lock on the databases it opens. The database remains accessible from other clients. A further — very important — difference is that Linux Classic validates every login against the security database: no connections with phony user names or passwords here! Finally, you can't just ship libfbembed.so with your application and use it to connect to local databases. Under Linux, you always need a properly installed server, be it Classic or Super.

3.5.
### 3.5. Application development

Once a database has been created and populated, its information can be accessed through a client application. Some applications — such as the Firebird `isql` tool, EMS SQL Manager, IB_SQL, IBAccess, Database Workbench, FlameRobin and IBOConsole — provide the capability to query data interactively and to create new metadata.

Any application developed as a user interface to one or more Firebird databases will use the SQL query language, both to define the sets of data that can be stored and to pass requests to the server about rows it wants to update, insert into or delete from. SQL statements also convey the values which the application wants to be applied to those rows. Firebird implements a set of SQL syntaxes with a high degree of compliance with the recognised SQL-92 and SQL-99 standards. The Firebird API provides complete structures for packaging SQL statements and the associated parameters, as well as for receiving the results.

#### 3.5.1. Embedded SQL in Firebird applications

Firebird provides the capability to embed SQL statements in applications written in C/C++ and some other programming languages. The code is then passed through `gpre`, the pre-processor, which substitutes the embedded SQL statements with equivalent host-language code that calls functions in Firebird's client API library. The `gpre` pre-processor generates a file that the host-language compiler can compile.

A special, extra subset of SQL-like source commands is available for this style of application; these commands are pre-processed into internal macro calls to the API. Known as Embedded SQL (ESQL), this subset provides a simpler, higher-level language syntax for the programmer, which `gpre` can interpret and re-code according to the more complex structure of the equivalent API calls. The InterBase Embedded SQL Guide (https://www.ibphoenix.com/downloads/60EmbedSQL.zip) provides extensive documentation on this subject.

#### 3.5.2. Predefined vs. dynamic queries

Some queries have to be run in exactly the same form every time they are needed. Queries like this are good candidates for embedding in the host language and pre-processing by `gpre`. The pre-processor turns them into API function calls, giving somewhat better performance than SQL that has to be interpreted at runtime.

But many applications need to build queries that are at least partially dependent on information provided by the user — freely entered in a text box, or selected from a list of options. This is called Dynamic SQL, or DSQL: SQL code whose form is not (or not exactly) known at design time. DSQL can be embedded and preprocessed too, but some additional requirements and restrictions apply. More on this — again — in the InterBase Embedded SQL Guide.

Delphi and C++ data access components provide properties and methods to analyse and parse DSQL request statements and manage the results passed back. Applications that use ODBC or other generic interfaces always work with DSQL statements, even if the user doesn't always see them. Query-by-example and other visual query tools, for instance, provide the user with a convenient, easy-to-use, and often “SQL-free” interface to extract, modify or delete data from the database. Yet the underlying code translates the user's input into DSQL statements, which are subsequently passed to the ODBC (or other) layer.

Component interfaces provide methods and properties for building and preparing SQL template statements, allowing you to use placeholders for value criteria. At run time, the application supplies input parameters of the appropriate data type to complete the statement. Provision is also made for retrieving output parameters from statements that return results after they are executed.

Of course, the use of data access components isn't limited to dynamic SQL. You can also store static SQL strings — known at design time — in them.
#### 3.5.3. RAD environments and component suites

With the rise of rapid application development (RAD) tools in the past decade, the encapsulation of the API functions in suites of components presents a variety of attractive application development options for Firebird developers.

**The Borland Database Engine (BDE)**

Borland markets “enterprise versions” of a number of integrated development tools — Delphi, Kylix, C++Builder, JBuilder and some older products — which can use the proprietary Borland Database Engine and native SQL Links InterBase drivers as a “black box” middleware layer to make InterBase and, latterly, Firebird databases behave like desktop databases. BDE version 5.2 and its associated InterBase driver, which first shipped with Delphi 6E, supports both Firebird and InterBase version 6, although it has known bugs affecting the Dialect 3 date and time data types.

Because the BDE's purpose is to surface a generic, database-vendor-independent, client-centered data access layer to the IDE tools, it flattens out the differences between a wide range of different database systems. Hence, it limits the capability of applications to exploit the best features of Firebird, particularly multiple transactions per connection, control of transaction aging and concurrency control. The BDE can be useful where you need to develop an application that might be used with a choice of back-ends, of which Firebird is only one. Be warned, however, that people have reported problems with Firebird database access via the BDE, and these are likely to increase in number and severity as Firebird continues to move further away from InterBase.

**SQLDirect**

SQLDirect is a shareware, lightweight replacement for the BDE. It supports Firebird at least up to and including version 1.5. Visit http://www.sqldirect-soft.com for more information and free trial versions.
**dbExpress and DataSnap**

dbExpress and DataSnap were introduced in later versions of the Borland tools to provide alternative generic interfaces to databases. They replace the BDE by moving its functionality into expanded native drivers for supported databases. Like the BDE, they do not support multiple concurrent transactions. They are especially useful where a data interface is required that is independent of the idiosyncrasies of different database management systems. The InterBase native drivers should provide adequate support for Firebird databases where optimising client/server performance is not high among the objectives.

**Direct-to-API Components**

In response to the shortcomings of the BDE, a number of component suites have become available for Delphi and Borland C++Builder that bypass the BDE layer completely and encapsulate the Firebird API directly:

**IBObjects**

IB Objects is a rich set of visual and non-visual database components that has been stable since 1997. It offers two BDE-free suites for data access: one compatible with the native Delphi and C++Builder TDataSource and visual controls, the other completely independent of the Delphi data access architecture and supplied with its own visual controls. [http://www.ibobjects.com](http://www.ibobjects.com)

**FIBPlus**

FIBPlus is another well-known and stable suite for Delphi and BCB. Developed from Gregory Deatz's FreeIBComponents suite, it offers connectivity based on TDataset. It doesn't include any visual components, but it works well together with the Borland visual database classes, as well as with third-party visual data-aware components.

**ZeosLib**

ZeosLib is a set of free, open-source database connectivity components for Delphi, FreePascal/Lazarus, Kylix and C++Builder. It supports a number of database systems, including Firebird. Because the ZeosLib components are based on TDataset, you can use them together with the Borland visual database controls.
[http://www.zeoslib.net/](http://www.zeoslib.net/)

**IBX (InterBase Express)**

IBX was also developed from the FreeIBComponents. Its TDataset-based data access components were purchased by Borland for development as a proprietary product, and components encapsulating the new Services API were added. It was left unfinished in 1999. The IBX source code was opened under the InterBase Public License in 2000 and it continues to be developed as an open-source project. Borland distributes versions of IBX with some Delphi, Kylix and C++Builder versions. Since InterBase and Firebird are diverging more and more, and Borland has (quite understandably) no intention to keep IBX Firebird-compatible, you should probably *not* use it with Firebird versions 1.5 and higher (although most features will still be supported).

**UIB (Unified InterBase)**

This is a set of non-visual components for Delphi, BCB, Kylix, Lazarus and FPC, supporting Firebird, InterBase and Yaffil. A ready-to-use SQL monitor (Windows only) is also available: http://www.progdigy.com/modules.php?name=UIB

The UIB components are also contained in the JEDI Visual Component Library (JVCL): http://homepages.borland.com/jedi/jvcl/

Both UIB and the JVCL are freely available open-source products.

#### 3.5.4. Other connectivity components and drivers

**Microsoft “open connectivity”**

**Java**

A pure Java Type 4 JDBC driver called Jaybird is developed within the Firebird project. Jaybird is compliant with both the new JCA standard for application server connections to enterprise information systems and the established JDBC standard. Firebird has abandoned re-implementation of the Borland InterClient and Interserver platform-independent client layers.

**.NET**

The Firebird ADO.NET Data Provider, developed as a Firebird subproject, re-implements the client API functions in C#. Consequently, .NET developers only need the data provider to talk to a Firebird server; there's no need to install a regular Firebird client.
Home page: https://firebirdsql.org/en/net-provider/

#### 3.5.5. API applications

The Firebird client program supports two discrete application programming interface (API) modules. The most important is the core API, through which all database work is performed. A much smaller API module, the Services API, provides functions for accessing various command-line and other utilities from application programs. Developers can write high-level programming or script language applications that populate the data structures and call the API functions directly.

**The Firebird core API**

Programmers who want to use the core API have to write code for allocating and populating the data structures that provide the communication layer between the client library and the server. Interactive SQL clients, component interfaces and embedded SQL “hide” these structures from the programmer by encapsulating them in their own higher-level interfaces. Writing code that calls the API functions directly can be more powerful and flexible, with the following benefits:

- Memory allocation control
- No precompiling necessary
- Access to transaction handles and options
- Full access to error messages

**Core API function categories**

Based on their operation targets, the API functions can be divided into the following categories:

- Database connection control
- Transaction control
- Statement execution
- Blob functions
- Array functions
- Security functions
- Informational functions
- Type conversions

**The Services API**

The opened InterBase 6.0 code from which Firebird was developed surfaced for the first time an application programming interface (API) providing a function call interface to certain server activities such as backup/restore, statistics and user management. Many of these calls provide programming interfaces to the code in the command-line tools. A few lower-level server functions are included as well, some of which overlap functions already available in the core API.
Before Firebird 1.5, the Services API was only available with Firebird Superserver. Support for the entire Services API in Classic Server versions was completed in Firebird 1.5.1. Borland’s InterBase Express (IBX) components include a subset—known as the Service components—encapsulating access to services API calls from some versions of their Delphi, Kylix and C++Builder development environments. Be aware however that IBX does not officially support Firebird. The higher your Firebird version, the more chance of incompatibilities and errors if you use IBX. **IBPP** IBPP is a C++ interface to the Firebird API. It is a class library written in “pure” C++ and hence not dependent on any specific programming environment, component set or even OS. You can use it anywhere you want Firebird connectivity in your C++ programs. IBPP was created and is maintained independently of the Firebird project. It is available for free and comes with its own very liberal license. The IBPP home page is at http://www.ibpp.org. ### 3.6. Server-side programming Among Firebird’s powerful features for dynamic client/server application programming is its capability to precompile source code on the server, storing the object code right inside the database in most cases. Such procedures and functions are executed completely on the server, optionally returning values or data sets to the client application. Firebird provides three styles of server-side programming capability: stored procedures, triggers and user-defined functions (UDFs). #### 3.6.1. Stored procedures Firebird’s procedural language (PSQL) implements extensions to its SQL language, providing conditional logic, flow control structures, exception handling (both built-in and user-defined), local variables, an event mechanism and the capability to accept input arguments of almost any type supported by Firebird. 
It implements a powerful flow control structure for processing cursors which can output a dataset directly to client memory without the need to create temporary tables. Such procedures are called from the client with a `SELECT` statement and are known to developers as **selectable stored procedures**. Procedures that don’t return a dataset (although they may return result variables) are called **executable stored procedures**; they are called with `EXECUTE PROCEDURE`. Stored procedures can call other stored procedures and can be recursive. All stored procedure execution, including selection of data sets from procedures and embedded calls to other procedures, is under the control of the single transaction that calls it. Accordingly, the work of a stored procedure call will be cancelled totally if the client rolls back the transaction. #### 3.6.2. Triggers Triggers are special procedures created for specific tables, for automatic execution during the process of committing DML work to the server. Any table can have any number of triggers to be executed before or after inserts, updates and deletions. Execution order is determined by a position parameter in the trigger’s declaration. Triggers have some PSQL extensions not available to regular stored procedures or to dynamic SQL, most notably the context variables `OLD` and `NEW` which, when prefixed to a column identifier, provide references to the existing and requested new values of the column. Triggers can call stored procedures, but not other triggers. Work performed by triggers will be rolled back if the transaction that prompted them is rolled back. #### 3.6.3. PSQL limitations Stored procedures and triggers cannot start transactions, since they are under transaction control themselves. PSQL does not allow the execution of DDL (Data Definition Language) statements: it is strictly meant to operate on data, not on the structure of your database. 
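To make the concepts from sections 3.6.1 and 3.6.2 concrete before we continue, here is a hedged PSQL sketch of a selectable stored procedure and a trigger. All object names (`numbers_up_to`, `stardust`, `stardust_bi`) are invented for illustration; the syntax follows Firebird 1.5-era dialect 3 conventions and should be checked against your own server version.

```sql
SET TERM ^ ;

/* Selectable stored procedure (3.6.1): SUSPEND emits one output
   row and waits until the client fetches the next one. */
CREATE PROCEDURE numbers_up_to (n INTEGER)
RETURNS (i INTEGER)
AS
BEGIN
  i = 1;
  WHILE (i <= n) DO
  BEGIN
    SUSPEND;        /* emit the current value of i as a result row */
    i = i + 1;
  END
END ^

/* Trigger (3.6.2): NEW refers to the incoming column values;
   this one normalises a name before every insert. */
CREATE TRIGGER stardust_bi FOR stardust
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
  NEW.name = UPPER(NEW.name);
END ^

SET TERM ; ^

/* A selectable procedure is queried like a table ...      */
SELECT i FROM numbers_up_to(5);
/* ... whereas a procedure without SUSPEND would be invoked
   with EXECUTE PROCEDURE instead.                         */
```

Note that both bodies consist purely of data-manipulation logic; as stated above, PSQL does not allow DDL statements inside procedures or triggers.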
Although you can circumvent this limitation with the `EXECUTE STATEMENT` syntax introduced in Firebird 1.5, it is generally considered unwise to do so. (Just because we give you a spade, it doesn’t mean that you have to dig your own grave.) ### 3.6.4. User-defined functions By design, in order to preserve its small footprint, Firebird comes with a very modest arsenal of internally-defined (native) data transformation functions. Developers can write their own very precise functions in familiar host-language code such as C/C++, Pascal or Object Pascal to accept arguments and return a single result. Once an external function — UDF — is declared to a database, it becomes available as a valid SQL function to applications, stored procedures and triggers. Firebird supplies two libraries of ready-to-use UDFs: `ib_udf` and `fbudf`. Firebird looks for UDF libraries in its own `UDF` subdirectory or in other directories specified in the Firebird configuration file. In Firebird 1.5 and upward this is done with the `UDFAccess` parameter; in earlier versions with `external_function_directory`. In versions prior to 1.5 the `fbudf` library is only available on Windows. ### 3.7. Multi-database applications Firebird applications can work with several databases at the same time through the client library — something that not all relational database systems allow. Tables from separate databases cannot be joined to return linked sets, but cursors can be used to combine information. If consistency across database boundaries is required, Firebird can manage output sets from querying multiple databases inside a single transaction. Firebird implements automatic two-phase commit when data changes occur, to ensure that changes cannot be committed in one database if changes in another database, within the same transaction context, are rolled back or lost through a network failure. 
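Returning briefly to UDFs (section 3.6.4): once the library file is in place, a function must be declared to each database that wants to use it. The sketch below follows the declarations shipped in Firebird's own `ib_udf.sql` script; verify the entry point and module name against your installation before relying on them.

```sql
/* Declare the lower() function from the supplied ib_udf library.
   After this declaration, LOWER(...) becomes available to SQL,
   stored procedures and triggers of this database. */
DECLARE EXTERNAL FUNCTION lower
  CSTRING(255)
  RETURNS CSTRING(255) FREE_IT
  ENTRY_POINT 'IB_UDF_lower' MODULE_NAME 'ib_udf';
```

The `MODULE_NAME` is resolved against the directories permitted by the `UDFAccess` (or `external_function_directory`) setting discussed in section 3.6.4.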
## Appendix A: Document history

The exact file history is recorded in the firebird-documentation git repository; see https://github.com/FirebirdSQL/firebird-documentation

### Revision History

<table>
<thead>
<tr>
<th>Revision</th>
<th>Date</th>
<th>Author</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.0</td>
<td>2002</td>
<td>IBP</td>
<td>Written and published by IBPhoenix.</td>
</tr>
<tr>
<td>1.0</td>
<td>2005</td>
<td>IBP</td>
<td>Donated to Firebird Project by IBPhoenix.</td>
</tr>
<tr>
<td>2.0</td>
<td>12 Nov 2006</td>
<td>PV</td>
<td>Preface added. <strong>About Firebird:</strong> Updated to 1.5 and 2.0. Rewrote/corrected/extended a number of sections. Introduced Embedded Server. Introduced aliases and (other) security features. Introduced nbackup. Introduced Services Manager + API. Added rows Availability, Local access on Windows, Multiprocessor support, Services Manager + API, Guardian on Windows and Guardian on Linux to the Classic-Super table. Added same-named sections to the Classic-Super discussion following the table. Explained bulkiness of Linux Classic client. Added note on AMD processors. <strong>About Clients and Servers:</strong> Updated to 1.5 and 2.0. Rewrote/corrected/extended a number of sections. Added sections on client filenames, multiple servers and server filenames. Added major section on embedded servers. Introduced SQLDirect. Warned against use of IBX with higher Firebird versions. Introduced UIB and JVCL. Introduced .NET data provider. Introduced IBPP. Explained the concept of executable stored procedures. Added section on PSQL limitations. Document History added. License Notice added.</td>
</tr>
<tr>
<td>2.0</td>
<td>31 Dec 2006</td>
<td>PV</td>
<td><strong>About this book :: More documentation:</strong> Changed text and hyperlink. <strong>About Firebird:</strong> Changed title of memory requirements table. <strong>About Clients and Servers:</strong> Added ZeosLib to Direct-to-API Components section. Gave all tables and figures an ID to make their URLs persistent across builds.</td>
</tr>
<tr>
<td>2.0</td>
<td>16 Jul 2007</td>
<td>PV</td>
<td><strong>About this book :: Summary of features:</strong> Gave table a “keep-together=auto” PI, necessary with FOP 0.93+. <strong>About Clients and Servers :: Client and server combined: Firebird Embedded Server:</strong> Added titleabbrev. Also corrected the first Note, which incorrectly stated that Fb2's fbembed.dll can't be used as a network client (altered 1st sentence, dropped last). Put “localhost:” in a quote in that same note. <strong>License notice:</strong> Updated copyright year.</td>
</tr>
<tr>
<td>2.0</td>
<td>12 Jul 2020</td>
<td>M</td>
<td>Conversion to AsciiDoc, minor copy-editing, fixed some (but not all) broken links.</td>
</tr>
</tbody>
</table>

## Appendix B: License notice

The contents of this Documentation are subject to the Public Documentation License Version 1.0 (the “License”); you may only use this Documentation if you comply with the terms of this License. Copies of the License are available at https://www.firebirdsql.org/pdfmanual/pdl.pdf (PDF) and https://www.firebirdsql.org/manual/pdl.html (HTML).

The Original Documentation is titled *Using Firebird*. The Initial Writer of the Original Documentation is: IBPhoenix Editors. Contributor: Paul Vinkenoog - see document history. Portions created by Paul Vinkenoog are Copyright © 2006-2007. All Rights Reserved. Contributor contact: paul at vinkenoog dot nl.
# Defining Recursive Functions in Isabelle/HOL

Alexander Krauss

**Abstract.** This tutorial describes the use of the function package, which provides general recursive function definitions for Isabelle/HOL. We start with very simple examples and then gradually move on to more advanced topics such as manual termination proofs, nested recursion, partiality, tail recursion and congruence rules.

## 1 Introduction

Starting from Isabelle 2007, new facilities for recursive function definitions [2] are available. They provide better support for general recursive definitions than previous packages. But despite all tool support, function definitions can sometimes be a difficult thing.

This tutorial is an example-guided introduction to the practical use of the package and related tools. It should help you get started with defining functions quickly. For the more difficult definitions we will discuss what problems can arise, and how they can be solved.

We assume that you have mastered the fundamentals of Isabelle/HOL and are able to write basic specifications and proofs. To start out with Isabelle in general, consult the Isabelle/HOL tutorial [4].

**Structure of this tutorial.** Section 2 introduces the syntax and basic operation of the fun command, which provides full automation with reasonable default behavior. The impatient reader can stop after that section, and consult the remaining sections only when needed. Section 3 introduces the more verbose function command which gives fine-grained control. This form should be used whenever the short form fails. After that we discuss more specialized issues: termination, mutual, nested and higher-order recursion, partiality, pattern matching and others.

**Some background.** Following the LCF tradition, the package is realized as a definitional extension: recursive definitions are internally transformed into a non-recursive form, such that the function can be defined using standard definition facilities.
Then the recursive specification is derived from the primitive definition. This is a complex task, but it is fully automated and mostly transparent to the user. Definitional extensions are valuable because they are conservative by construction: the “new” concept of general wellfounded recursion is completely reduced to existing principles.

The new function command, and its short form fun, have mostly replaced the traditional recdef command [5]. They solve a few technical issues around recdef, and allow definitions which were not previously possible. In addition there is also the partial_function command (see [6]) that supports the definition of partial and tail recursive functions.

## 2 Function Definitions for Dummies

In most cases, defining a recursive function is just as simple as other definitions:

```isabelle
fun fib :: "nat ⇒ nat"
where
  "fib 0 = 1"
| "fib (Suc 0) = 1"
| "fib (Suc (Suc n)) = fib n + fib (Suc n)"
```

The syntax is rather self-explanatory: we introduce a function by giving its name, its type, and a set of defining recursive equations. If we leave out the type, the most general type will be inferred, which can sometimes lead to surprises: since both 1 and + are overloaded, we would end up with `fib :: nat ⇒ 'a::{one,plus}`.

The function always terminates, since its argument gets smaller in every recursive call. Since HOL is a logic of total functions, termination is a fundamental requirement to prevent inconsistencies¹. Isabelle tries to prove termination automatically when a definition is made. In §4, we will look at cases where this fails and see what to do then.

### 2.1 Pattern matching

Like in functional programming, we can use pattern matching to define functions. At the moment we will only consider constructor patterns, which consist only of datatype constructors and variables. Furthermore, patterns must be linear, i.e. all variables on the left-hand side of an equation must be distinct.
In §7 we discuss more general pattern matching.

If patterns overlap, the order of the equations is taken into account. The following function inserts a fixed element between any two elements of a list:

```isabelle
fun sep :: "'a ⇒ 'a list ⇒ 'a list"
where
  "sep a (x#y#xs) = x # a # sep a (y # xs)"
| "sep a xs = xs"
```

Overlapping patterns are interpreted as “increments” to what is already there: the second equation is only meant for the cases where the first one does not match. Consequently, Isabelle replaces it internally by the remaining cases, making the patterns disjoint:

```isabelle
thm sep.simps
```

```
sep a (x # y # xs) = x # a # sep a (y # xs)
sep a [] = []
sep a [v] = [v]
```

The equations from function definitions are automatically used in simplification:

```isabelle
lemma "sep 0 [1, 2, 3] = [1, 0, 2, 0, 3]"
by simp
```

### 2.2 Induction

Isabelle provides customized induction rules for recursive functions. These rules follow the recursive structure of the definition. Here is the rule sep.induct arising from the above definition of sep:

```
⟦⋀a x y xs. ?P a (y # xs) ⟹ ?P a (x # y # xs);
 ⋀a. ?P a [];
 ⋀a v. ?P a [v]⟧
⟹ ?P ?a0.0 ?a1.0
```

We have a step case for lists with at least two elements, and two base cases for the zero- and the one-element list. Here is a simple proof about sep and map:

```isabelle
lemma "map f (sep x ys) = sep (f x) (map f ys)"
apply (induct x ys rule: sep.induct)
```

We get three cases, like in the definition:

```
1. ⋀a x y xs. map f (sep a (y # xs)) = sep (f a) (map f (y # xs)) ⟹
              map f (sep a (x # y # xs)) = sep (f a) (map f (x # y # xs))
2. ⋀a. map f (sep a []) = sep (f a) (map f [])
3. ⋀a v. map f (sep a [v]) = sep (f a) (map f [v])
```

```isabelle
apply auto
done
```

With the fun command, you can define about 80% of the functions that occur in practice. The rest of this tutorial explains the remaining 20%.

¹ From the “definition” f(n) = f(n) + 1 we could prove 0 = 1 by subtracting f(n) on both sides.

## 3 fun vs. function

The fun command provides a convenient shorthand notation for simple function definitions. In this mode, Isabelle tries to solve all the necessary proof obligations automatically. If any proof fails, the definition is rejected. This can either mean that the definition is indeed faulty, or that the default proof procedures are just not smart enough (or rather: not designed) to handle the definition.

By expanding the abbreviation to the more verbose function command, these proof obligations become visible and can be analyzed or solved manually. The expansion from fun to function is as follows:

Some details have now become explicit:

1. The sequential option enables the preprocessing of pattern overlaps which we already saw. Without this option, the equations must already be disjoint and complete. The automatic completion only works with constructor patterns.

2. A function definition produces a proof obligation which expresses completeness and compatibility of patterns (we talk about this later). The combination of the methods pat_completeness and auto is used to solve this proof obligation.

3. A termination proof follows the definition, started by the termination command. This will be explained in §4.

Whenever a fun command fails, it is usually a good idea to expand the syntax to the more verbose function form, to see what is actually going on.

## 4 Termination

The method lexicographic_order is the default method for termination proofs. It can prove termination of a certain class of functions by searching for a suitable lexicographic combination of size measures. Of course, not all functions have such a simple termination argument. For them, we can specify the termination relation manually.
4.1 The relation method

Consider the following function, which sums up natural numbers up to N, using a counter i:

```isar
function sum :: "nat ⇒ nat ⇒ nat"
where
  "sum i N = (if i > N then 0 else i + sum (Suc i) N)"
by pat_completeness auto
```

The `lexicographic_order` method fails on this example, because none of the arguments decreases in the recursive call, with respect to the standard size ordering. To prove termination manually, we must provide a custom wellfounded relation. The termination argument for `sum` is based on the fact that the difference between i and N gets smaller in every step, and that the recursion stops when i is greater than N. Phrased differently, the expression N + 1 - i always decreases. We can use this expression as a measure function suitable to prove termination.

```isar
termination sum
apply (relation "measure (λ(i,N). N + 1 - i)")
```

The `termination` command sets up the termination goal for the specified function `sum`. If the function name is omitted, it implicitly refers to the last function definition. The `relation` method takes a relation of type `('a × 'a) set`, where `'a` is the argument type of the function. If the function has multiple curried arguments, then these are packed together into a tuple, as happened in the above example. The predefined function `measure :: ('a ⇒ nat) ⇒ ('a × 'a) set` constructs a wellfounded relation from a mapping into the natural numbers (a measure function). After the invocation of `relation`, we must prove that (a) the relation we supplied is wellfounded, and (b) that the arguments of recursive calls indeed decrease with respect to the relation:

1. `wf (measure (λ(i,N). N + 1 - i))`
2. `⋀i N. ¬ N < i ⟹ ((Suc i, N), i, N) ∈ measure (λ(i,N). 
N + 1 - i)`

These goals are all solved by `auto`:

```isar
apply auto
done
```

Let us complicate the function a little, by adding some more recursive calls:

```isar
function foo :: "nat ⇒ nat ⇒ nat"
where
  "foo i N = (if i > N
      then (if N = 0 then 0 else foo 0 (N - 1))
      else i + foo (Suc i) N)"
by pat_completeness auto
```

When `i` has reached `N`, it starts at zero again and `N` is decremented. This corresponds to a nested loop where one index counts up and the other down. Termination can be proved using a lexicographic combination of two measures, namely the value of `N` and the above difference. The `measures` combinator generalizes `measure` by taking a list of measure functions.

```isar
termination
by (relation "measures [λ(i,N). N, λ(i,N). N + 1 - i]") auto
```

4.2 How `lexicographic_order` works

To see how the automatic termination proofs work, let's look at an example where it fails:

```isar
fun fails :: "nat ⇒ nat list ⇒ nat"
where
  "fails a [] = a"
| "fails a (x#xs) = fails (x + a) (x#xs)"
```

Isabelle responds with the following error:

```
*** Unfinished subgoals:
*** (a, 1, <):
*** 1. ⋀x. x = 0
*** (a, 1, <=):
*** 1. False
*** (a, 2, <):
*** 1. False
*** Calls:
*** a) (a, x # xs) -->> (x + a, x # xs)
*** Measures:
*** 1) λx. size (fst x)
*** 2) λx. size (snd x)
*** Result matrix:
***     1   2
*** a:  ?   <=
*** Could not find lexicographic termination order.
*** At command "fun".
```

For a detailed discussion of the termination prover, see [1].

The key to this error message is the matrix at the bottom. The rows of that matrix correspond to the different recursive calls (in our case, there is just one). The columns are the function's arguments (expressed through different measure functions, which map the argument tuple to a natural number).
The contents of the matrix summarize what is known about argument descents: the second argument has a weak descent (<=) at the recursive call, and for the first argument nothing could be proved, which is expressed by ?. In general, there are the values <, <= and ?. For the failed proof attempts, the unfinished subgoals are also printed. Looking at these will often point to a missing lemma.

4.3 The size_change method

Some termination goals that are beyond the powers of lexicographic_order can be solved automatically by the more powerful size_change method, which uses a variant of the size-change principle, together with some other techniques. While the details are discussed elsewhere [3], here are a few typical situations where lexicographic_order has difficulties and size_change may be worth a try:

- Arguments are permuted in a recursive call.
- Several mutually recursive functions with multiple arguments.
- Unusual control flow (e.g., when some recursive calls cannot occur in sequence).

Loading the theory Multiset makes the size_change method a bit stronger: it can then use multiset orders internally.

4.4 Configuring simplification rules for termination proofs

Since both lexicographic_order and size_change rely on the simplifier internally, there can sometimes be the need for adding additional simp rules to them. This can be done either as arguments to the methods themselves, or globally via the theorem attribute termination_simp, which is useful in rare cases.

5 Mutual Recursion

If two or more functions call one another mutually, they have to be defined in one step. Here are even and odd:

```isar
function even :: "nat ⇒ bool" and odd :: "nat ⇒ bool"
where
  "even 0 = True"
| "odd 0 = False"
| "even (Suc n) = odd n"
| "odd (Suc n) = even n"
by pat_completeness auto
```

To eliminate the mutual dependencies, Isabelle internally creates a single function operating on the sum type nat + nat.
Then, `Functions.even` and `Functions.odd` are defined as projections. Consequently, termination has to be proved simultaneously for both functions, by specifying a measure on the sum type:

```isar
termination
by (relation "measure (λx. case x of Inl n ⇒ n | Inr n ⇒ n)") auto
```

We could also have used `lexicographic_order`, which supports mutually recursive termination proofs to a certain extent.

5.1 Induction for mutual recursion

When functions are mutually recursive, proving properties about them generally requires simultaneous induction. The induction rule even_odd.induct generated from the above definition reflects this. Let us prove something about `Functions.even` and `Functions.odd`:

```isar
lemma even_odd_mod2:
  "even n = (n mod 2 = 0)"
  "odd n = (n mod 2 = 1)"
```

We apply simultaneous induction, specifying the induction variable for both goals, separated by `and`:

```isar
apply (induct n and n rule: even_odd.induct)
```

We get four subgoals, which correspond to the clauses in the definition of `Functions.even` and `Functions.odd`:

1. Functions.even 0 = (0 mod 2 = 0)
2. Functions.odd 0 = (0 mod 2 = 1)
3. ⋀n. Functions.odd n = (n mod 2 = 1) ⟹ Functions.even (Suc n) = (Suc n mod 2 = 0)
4. ⋀n. Functions.even n = (n mod 2 = 0) ⟹ Functions.odd (Suc n) = (Suc n mod 2 = 1)

Simplification solves the first two goals, leaving us with two statements about the mod operation to prove:

```isar
apply simp_all
```

1. ⋀n. Functions.odd n = (n mod 2 = Suc 0) ⟹ (n mod 2 = Suc 0) = (Suc n mod 2 = 0)
2. ⋀n. Functions.even n = (n mod 2 = 0) ⟹ (n mod 2 = 0) = (Suc n mod 2 = Suc 0)

These can be handled by Isabelle's arithmetic decision procedures.
```isar
apply arith
apply arith
done
```

In proofs like this, the simultaneous induction is really essential: even if we are just interested in one of the results, the other one is necessary to strengthen the induction hypothesis. If we leave out the statement about `Functions.odd` and just write `True` instead, the same proof fails:

```isar
lemma failed_attempt:
  "even n = (n mod 2 = 0)"
  "True"
apply (induct n rule: even_odd.induct)
```

Now the third subgoal is a dead end, since we have no useful induction hypothesis available:

1. Functions.even 0 = (0 mod 2 = 0)
2. True
3. ⋀n. True ⟹ Functions.even (Suc n) = (Suc n mod 2 = 0)
4. ⋀n. Functions.even n = (n mod 2 = 0) ⟹ True

```isar
oops
```

6 Elimination

A definition of a function f gives rise to two kinds of elimination rules. Rule f.cases simply describes case analysis according to the patterns used in the definition:

```isar
fun list_to_option :: "'a list ⇒ 'a option"
where
  "list_to_option [x] = Some x"
| "list_to_option _ = None"

thm list_to_option.cases
```

⟦⋀x. ?x = [x] ⟹ ?P; ?x = [] ⟹ ?P; ⋀v v' v''. ?x = v # v' # v'' ⟹ ?P⟧ ⟹ ?P

Note that this rule does not mention the function at all, but only describes the cases used for defining it. In contrast, the rule list_to_option.elims also tells us what the function value will be in each case:

```isar
thm list_to_option.elims
```

7 General pattern matching

7.1 Avoiding automatic pattern splitting

Up to now, we used pattern matching only on datatypes, and the patterns were always disjoint and complete, and if they weren't, they were made disjoint automatically like in the definition of sep in §2.1. This automatic splitting can significantly increase the number of equations involved, and this is not always desirable.
First, though, let us complete the discussion of list_to_option.elims from §6: the rule lets us eliminate an assumption of the form list_to_option xs = y and replace it with the corresponding cases, e.g.:

```isar
lemma "list_to_option xs = y ⟹ P"
proof (erule list_to_option.elims)
  fix x assume "xs = [x]" "y = Some x"
  thus P sorry
next
  assume "xs = []" "y = None"
  thus P sorry
next
  fix a b xs' assume "xs = a # b # xs'" "y = None"
  thus P sorry
qed
```

Sometimes it is convenient to derive specialized versions of the elim rules above and keep them around as facts explicitly. For example, it is natural to show that if list_to_option xs = Some y, then xs must be a singleton. The command fun_cases derives such facts automatically, by instantiating and simplifying the general elimination rules given some pattern:

```isar
fun_cases list_to_option_SomeE[elim]: "list_to_option xs = Some y"
thm list_to_option_SomeE
```

Returning to pattern splitting, the following example shows the problem: suppose we are modeling incomplete knowledge about the world by a three-valued datatype, which has values T, F and X for true, false and uncertain propositions, respectively.

```isar
datatype P3 = T | F | X
```

Then the conjunction of such values can be defined as follows:

```isar
fun And :: "P3 ⇒ P3 ⇒ P3"
where
  "And T p = p"
| "And p T = p"
| "And p F = F"
| "And F p = F"
| "And X X = X"
```

This definition is useful, because the equations can directly be used as simplification rules. But the patterns overlap: for example, the expression And T T is matched by both the first and the second equation. By default, Isabelle makes the patterns disjoint by splitting them up, producing instances:

```isar
thm And.simps
```

```
And T ?p = ?p
And F T = F
And X T = X
And F F = F
And X F = F
And F X = F
And X X = X
```

There are several problems with this:

1. If the datatype has many constructors, there can be an explosion of equations. For And, we get seven instead of five equations, which can be tolerated, but this is just a small example.

2. Since splitting makes the equations "less general", they do not always match in rewriting.
While the term And x F can be simplified to F with the original equations, a (manual) case split on x is now necessary.

3. The splitting also concerns the induction rule And.induct. Instead of five premises it now has seven, which means that our induction proofs will have more cases.

4. In general, it increases clarity if we get the same definition back which we put in.

If we do not want the automatic splitting, we can switch it off by leaving out the sequential option. However, we will have to prove that our pattern matching is consistent³:

```isar
function And2 :: "P3 ⇒ P3 ⇒ P3"
where
  "And2 T p = p"
| "And2 p T = p"
| "And2 p F = F"
| "And2 F p = F"
| "And2 X X = X"
```

³This prevents us from defining something like f x = True and f x = False simultaneously.

Now let's look at the proof obligations generated by a function definition. In this case, they are:

1. ⋀P x. ⟦⋀p. x = (T, p) ⟹ P; ⋀p. x = (p, T) ⟹ P; ⋀p. x = (p, F) ⟹ P; ⋀p. x = (F, p) ⟹ P; x = (X, X) ⟹ P⟧ ⟹ P
2. ⋀p pa. (T, p) = (T, pa) ⟹ p = pa
3. ⋀p pa. (T, p) = (pa, T) ⟹ p = pa
4. ⋀p pa. (T, p) = (pa, F) ⟹ p = F
5. ⋀p pa. (T, p) = (F, pa) ⟹ p = F
6. ⋀p. (T, p) = (X, X) ⟹ p = X
7. ⋀p pa. (p, T) = (pa, T) ⟹ p = pa
8. ⋀p pa. (p, T) = (pa, F) ⟹ p = F
9. ⋀p pa. (p, T) = (F, pa) ⟹ p = F
10. ⋀p. (p, T) = (X, X) ⟹ p = X

A total of 16 subgoals...

The first subgoal expresses the completeness of the patterns.
It has the form of an elimination rule and states that every x of the function's input type must match at least one of the patterns⁴. If the patterns just involve datatypes, we can solve it with the `pat_completeness` method:

```isar
apply pat_completeness
```

The remaining subgoals express pattern compatibility. We do allow that an input value matches multiple patterns, but in this case, the result (i.e. the right hand sides of the equations) must also be equal. For each pair of two patterns, there is one such subgoal. Usually this needs injectivity of the constructors, which is used automatically by `auto`.

```isar
by auto
termination by (relation "{}") simp
```

### 7.2 Non-constructor patterns

Most of Isabelle's basic types take the form of inductive datatypes, and usually pattern matching works on the constructors of such types. However, this need not always be the case, and the `function` command handles other kinds of patterns, too. One well-known instance of non-constructor patterns are so-called n + k patterns, which are a little controversial in the functional programming world. Here is the initial fibonacci example with n + k patterns:

```isar
function fib2 :: "nat ⇒ nat"
where
  "fib2 0 = 1"
| "fib2 1 = 1"
| "fib2 (n + 2) = fib2 n + fib2 (Suc n)"
```

This kind of matching is again justified by the proof of pattern completeness and compatibility. The proof obligation for pattern completeness states that every natural number is either 0, 1 or n + 2:

⋀x. ⟦x = 0 ⟹ P; x = 1 ⟹ P; ⋀n. x = n + 2 ⟹ P⟧ ⟹ P

A total of 7 subgoals...

⁴Completeness could be equivalently stated as a disjunction of existential statements: (∃p. x = (T, p)) ∨ (∃p. x = (p, T)) ∨ (∃p. x = (p, F)) ∨ (∃p. x = (F, p)) ∨ x = (X, X), and you can use the method atomize_elim to get that form instead.
This is an arithmetic triviality, but unfortunately the arith method cannot handle this specific form of an elimination rule. However, we can use the method atomize_elim to do an ad-hoc conversion to a disjunction of existentials, which can then be solved by the arithmetic decision procedure. Pattern compatibility and termination are automatic as usual.

```isar
apply atomize_elim
apply arith
apply auto
done
termination by lexicographic_order
```

We can stretch the notion of pattern matching even more. The following function is not a sensible functional program, but a perfectly valid mathematical definition:

```isar
function ev :: "nat ⇒ bool"
where
  "ev (2 * n) = True"
| "ev (2 * n + 1) = False"
apply atomize_elim
by arith+
termination by (relation "{}") simp
```

This general notion of pattern matching gives you a certain freedom in writing down specifications. However, as always, such freedom should be used with care: if we leave the area of constructor patterns, we have effectively departed from the world of functional programming. This means that it is no longer possible to use the code generator, and expect it to generate ML code for our definitions. Also, such a specification might not work very well together with simplification. Your mileage may vary.

7.3 Conditional equations

The function package also supports conditional equations, which are similar to guards in a language like Haskell. Here is Euclid's algorithm written with conditional patterns⁵:

```isar
function gcd :: "nat ⇒ nat ⇒ nat"
where
  "gcd x 0 = x"
| "gcd 0 y = y"
| "x < y ⟹ gcd (Suc x) (Suc y) = gcd (Suc x) (y - x)"
| "¬ x < y ⟹ gcd (Suc x) (Suc y) = gcd (x - y) (Suc y)"
by (atomize_elim, auto, arith)
termination by lexicographic_order
```

By now, you can probably guess what the proof obligations for the pattern completeness and compatibility look like. Again, functions with conditional patterns are not supported by the code generator.
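Since the text compares conditional equations to Haskell guards, the same subtraction-based Euclid algorithm can be written directly with guards. This is a sketch: `gcdSub` is a hypothetical name, and natural-number subtraction is emulated on non-negative Ints.

```haskell
-- Euclid's algorithm by repeated subtraction: the guard-based
-- analogue of the conditional gcd equations above. The guards
-- play the role of the conditions x < y and ¬ x < y.
gcdSub :: Int -> Int -> Int
gcdSub x 0 = x
gcdSub 0 y = y
gcdSub x y
  | x < y     = gcdSub x (y - x)   -- mirrors the x < y equation
  | otherwise = gcdSub (x - y) y   -- mirrors the ¬ x < y equation

main :: IO ()
main = print (gcdSub 12 18)
```

In Haskell the guards fix an evaluation order operationally; in Isabelle the conditions instead become premises of the equations, which is why extra completeness and compatibility obligations arise.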
⁵Note that the patterns are also overlapping in the base case.

7.4 Pattern matching on strings

As strings (as lists of characters) are normal datatypes, pattern matching on them is possible, but somewhat problematic. Consider the following definition:

```isar
fun check :: "string ⇒ bool"
where
  "check (''good'') = True"
| "check s = False"
```

An invocation of the above fun command does not terminate. What is the problem? Strings are lists of characters, and characters are a datatype with a lot of constructors. Splitting the catch-all pattern thus leads to an explosion of cases, which cannot be handled by Isabelle. There are two things we can do here. Either we write an explicit if on the right hand side, or we can use conditional patterns:

```isar
function check :: "string ⇒ bool"
where
  "check (''good'') = True"
| "s ≠ ''good'' ⟹ check s = False"
by auto
termination by (relation "{}") simp
```

8 Partiality

In HOL, all functions are total. A function f applied to x always has the value f x, and there is no notion of undefinedness. This is why we have to do termination proofs when defining functions: the proof justifies that the function can be defined by wellfounded recursion. However, the function package does support partiality to a certain extent. Let's look at the following function which looks for a zero of a given function f.

```isar
function findzero :: "(nat ⇒ nat) ⇒ nat ⇒ nat"
where
  "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
by pat_completeness auto
```

Clearly, any attempt of a termination proof must fail. And without that, we do not get the usual rules findzero.simps and findzero.induct. So what was the definition good for at all?

8.1 Domain predicates

The trick is that Isabelle has not only defined the function findzero, but also a predicate findzero_dom that characterizes the values where the function terminates: the domain of the function.
If we treat a partial function just as a total function with an additional domain predicate, we can derive simplification and induction rules as we do for total functions. They are guarded by domain conditions and are called psimps and pinduct:

findzero_dom (?f, ?n) ⟹ findzero ?f ?n = (if ?f ?n = 0 then ?n else findzero ?f (Suc ?n))   ("findzero.psimps")

Remember that all we are doing here is use some tricks to make a total function appear as if it was partial. We can still write the term findzero (λx. 1) 0 and like any other term of type nat it is equal to some natural number, although we might not be able to find out which one. The function is underdefined. But it is defined enough to prove something interesting about it. We can prove that if findzero f n terminates, it indeed returns a zero of f:

```isar
lemma findzero_zero: "findzero_dom (f, n) ⟹ f (findzero f n) = 0"
```

We apply induction as usual, but using the partial induction rule:

```isar
apply (induct f n rule: findzero.pinduct)
```

This gives the following subgoal:

1. ⋀f n. ⟦findzero_dom (f, n); f n ≠ 0 ⟹ f (findzero f (Suc n)) = 0⟧ ⟹ f (findzero f n) = 0

The hypothesis in our lemma was used to satisfy the first premise in the induction rule. However, we also get findzero_dom (f, n) as a local assumption in the induction step. This allows unfolding findzero f n using the psimps rule, and the rest is trivial.

```isar
apply (simp add: findzero.psimps)
done
```

Proofs about partial functions are often not harder than for total functions. Fig. 1 shows a slightly more complicated proof written in Isar. It is verbose enough to show how partiality comes into play: from the partial induction, we get an additional domain condition hypothesis. Observe how this condition is applied when calls to findzero are unfolded.
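To see the partiality concretely, findzero can be transliterated into Haskell, where nothing like findzero_dom exists and the partiality is real: a call simply diverges outside the domain. A sketch, with hypothetical example functions:

```haskell
-- Direct transliteration of findzero. In HOL, findzero is total but
-- underdefined outside findzero_dom; in Haskell the corresponding
-- calls just fail to terminate.
findzero :: (Int -> Int) -> Int -> Int
findzero f n = if f n == 0 then n else findzero f (n + 1)

main :: IO ()
main = do
  print (findzero (\x -> x * x - 9) 0)  -- first zero of x*x - 9 at or above 0
  -- findzero (\x -> x + 1) 0 would loop forever: the pair
  -- ((λx. x + 1), 0) is exactly a point outside findzero_dom.
```

The terminating call corresponds to an argument inside findzero_dom; the commented-out diverging call is the Haskell face of the "underdefined" values discussed above.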
### 8.2 Partial termination proofs

Now that we have proved some interesting properties about our function, we should turn to the domain predicate and see if it is actually true for some values. Otherwise we would have just proved lemmas with False as a premise. Essentially, we need some introduction rules for findzero_dom. The function package can prove such domain introduction rules automatically. But since they are not used very often (they are almost never needed if the function is total), this functionality is disabled by default for efficiency reasons. So we have to go back and ask for them explicitly by passing the domintros option to the function package:

```isar
function (domintros) findzero :: "(nat ⇒ nat) ⇒ nat ⇒ nat"
where
  "findzero f n = (if f n = 0 then n else findzero f (Suc n))"
by pat_completeness auto
```

Now the package has proved an introduction rule for findzero_dom:

```isar
thm findzero.domintros
```

```isar
lemma "⟦findzero_dom (f, n); x ∈ {n ..< findzero f n}⟧ ⟹ f x ≠ 0"
proof (induct rule: findzero.pinduct)
  fix f n
  assume dom: "findzero_dom (f, n)"
    and IH: "⟦f n ≠ 0; x ∈ {Suc n ..< findzero f (Suc n)}⟧ ⟹ f x ≠ 0"
    and x_range: "x ∈ {n ..< findzero f n}"
  have "f n ≠ 0"
  proof
    assume "f n = 0"
    with dom have "findzero f n = n" by (simp add: findzero.psimps)
    with x_range show False by auto
  qed
  from x_range have "x = n ∨ x ∈ {Suc n ..< findzero f n}" by auto
  thus "f x ≠ 0"
  proof
    assume "x = n"
    with `f n ≠ 0` show ?thesis by simp
  next
    assume "x ∈ {Suc n ..< findzero f n}"
    with dom and `f n ≠ 0` have "x ∈ {Suc n ..< findzero f (Suc n)}"
      by (simp add: findzero.psimps)
    with IH and `f n ≠ 0` show ?thesis by simp
  qed
qed
```

Figure 1:
A proof about a partial function

```isar
lemma findzero_termination:
  assumes "x ≥ n" and "f x = 0"
  shows "findzero_dom (f, n)"
proof -
  have base: "findzero_dom (f, x)"
    by (rule findzero.domintros) (simp add: `f x = 0`)
  have step: "⋀i. findzero_dom (f, Suc i) ⟹ findzero_dom (f, i)"
    by (rule findzero.domintros) simp
  from `x ≥ n` show ?thesis
  proof (induct rule: inc_induct)
    show "findzero_dom (f, x)" by (rule base)
  next
    fix i assume "findzero_dom (f, Suc i)"
    thus "findzero_dom (f, i)" by (rule step)
  qed
qed
```

Figure 2: Termination proof for findzero

(0 < ?f ?n ⟹ findzero_dom (?f, Suc ?n)) ⟹ findzero_dom (?f, ?n)

Domain introduction rules make it possible to show that a given value lies in the domain of a function, if the arguments of all recursive calls are in the domain as well. They let us do a "single step" in a termination proof. Usually, you want to combine them with a suitable induction principle. Since our function increases its argument at recursive calls, we need an induction principle which works "backwards". We will use inc_induct, which supports induction from a fixed number "downwards":

⟦?i ≤ ?j; ?P ?j; ⋀n. ⟦?i ≤ n; n < ?j; ?P (Suc n)⟧ ⟹ ?P n⟧ ⟹ ?P ?i   ("inc_induct")

Figure 2 gives a detailed Isar proof of the fact that findzero terminates if there is a zero which is greater or equal to n. First we derive two useful rules which will solve the base case and the step case of the induction. The induction is then straightforward, except for the unusual induction principle. Again, the proof given in Fig. 2 has a lot of detail in order to explain the principles.
Using more automation, we can also have a short proof:

```isar
lemma findzero_termination_short:
  assumes zero: "x ≥ n"
  assumes [simp]: "f x = 0"
  shows "findzero_dom (f, n)"
using zero
by (induct rule: inc_induct) (auto intro: findzero.domintros)
```

It is simple to combine the partial correctness result with the termination lemma:

```isar
lemma findzero_total_correctness:
  "f x = 0 ⟹ f (findzero f 0) = 0"
by (blast intro: findzero_zero findzero_termination)
```

8.3 Definition of the domain predicate

Sometimes it is useful to know what the definition of the domain predicate looks like. Actually, findzero_dom is just an abbreviation:

findzero_dom ≡ Wellfounded.accp findzero_rel

The domain predicate is the accessible part of a relation findzero_rel, which was also created internally by the function package. findzero_rel is just a normal inductive predicate, so we can inspect its definition by looking at the introduction rules findzero_rel.intros. In our case there is just a single rule:

?f ?n ≠ 0 ⟹ findzero_rel (?f, Suc ?n) (?f, ?n)

The predicate findzero_rel describes the recursion relation of the function definition. The recursion relation is a binary relation on the arguments of the function that relates each argument to its recursive calls. In general, there is one introduction rule for each recursive call. The predicate findzero_dom is the accessible part of that relation. An argument belongs to the accessible part if it can be reached in a finite number of steps (cf. its definition in Wellfounded.thy). Since the domain predicate is just an abbreviation, you can use lemmas for Wellfounded.accp and findzero_rel directly.
Some lemmas which are occasionally useful are accpI, accp_downward, and of course the introduction and elimination rules for the recursion relation, findzero_rel.intros and findzero_rel.cases.

9 Nested recursion

Recursive calls which are nested in one another frequently cause complications, since their termination proof can depend on a partial correctness property of the function itself. As a small example, we define the "nested zero" function:

```isar
function nz :: "nat ⇒ nat"
where
  "nz 0 = 0"
| "nz (Suc n) = nz (nz n)"
by pat_completeness auto
```

If we attempt to prove termination using the identity measure on naturals, this fails:

```isar
termination
apply (relation "measure (λn. n)")
apply auto
```

We get stuck with the subgoal

⋀n. nz_dom n ⟹ nz n < Suc n

Of course this statement is true, since we know that nz is the zero function. And in fact we have no problem proving this property by induction.

```isar
lemma nz_is_zero: "nz_dom n ⟹ nz n = 0"
```

function f91 :: "nat ⇒ nat"
where "f91 n = (if 100 < n then n - 10 else f91 (f91 (n + 11)))"
by pat_completeness auto

lemma f91_estimate:
  assumes trm: "f91_dom n"
  shows "n < f91 n + 11"
using trm by induct (auto simp: f91.psimps)

termination
proof
  let ?R = "measure (λx. 101 - x)"
  show "wf ?R" ..
  fix n :: nat
  assume "¬ 100 < n" — Assumptions for both calls
  thus "(n + 11, n) ∈ ?R" by simp — Inner call
  assume inner_trm: "f91_dom (n + 11)" — Outer call
  with f91_estimate have "n + 11 < f91 (n + 11) + 11" .
  with `¬ 100 < n` show "(f91 (n + 11), n) ∈ ?R" by simp
qed

Figure 3: McCarthy's 91-function

```isar
by (induct rule: nz.pinduct) (auto simp: nz.psimps)
```

We formulate this as a partial correctness lemma with the condition nz_dom n. This allows us to prove it with the pinduct rule before we have proved termination. With this lemma, the termination proof works as expected:

```isar
termination
by (relation "measure (λn. n)") (auto simp: nz_is_zero)
```

As a general strategy, one should prove the statements needed for termination as a partial property first. Then they can be used to do the termination proof. This also works for less trivial examples. Figure 3 defines the 91-function, a well-known challenge problem due to John McCarthy, and proves its termination.

10 Higher-Order Recursion

Higher-order recursion occurs when recursive calls are passed as arguments to higher-order combinators such as map, filter etc. As an example, imagine a datatype of n-ary trees:

```isar
datatype 'a tree =
  Leaf 'a
| Branch "'a tree list"
```

We can define a function which recursively reverses the order of subtrees at every node, using the list functions rev and map:

```isar
fun mirror :: "'a tree ⇒ 'a tree"
where
  "mirror (Leaf n) = Leaf n"
| "mirror (Branch l) = Branch (rev (map mirror l))"
```

Although the definition is accepted without problems, let us look at the termination proof:

```isar
termination proof
```

As usual, we have to give a wellfounded relation, such that the arguments of the recursive calls get smaller. But what exactly are the arguments of the recursive calls when mirror is given as an argument to map? Isabelle gives us the subgoals

1. wf ?R
2. ⋀l x. x ∈ set l ⟹ (x, Branch l) ∈ ?R

So the system seems to know that map only applies the recursive call mirror to elements of l, which is essential for the termination proof. This knowledge about map is encoded in so-called congruence rules, which are special theorems known to the function command.
The rule for map is

⟦?xs = ?ys; ⋀x. x ∈ set ?ys ⟹ ?f x = ?g x⟧ ⟹ map ?f ?xs = map ?g ?ys

You can read this in the following way: two applications of map are equal, if the list arguments are equal and the functions coincide on the elements of the list. This means that for the value map f l we only have to know how f behaves on the elements of l. Usually, one such congruence rule is needed for each higher-order construct that is used when defining new functions. In fact, even basic functions like If and Let are handled by this mechanism. The congruence rule for If states that the then branch is only relevant if the condition is true, and the else branch only if it is false:

⟦?b = ?c; ?c ⟹ ?x = ?u; ¬ ?c ⟹ ?y = ?v⟧ ⟹ (if ?b then ?x else ?y) = (if ?c then ?u else ?v)

Congruence rules can be added to the function package by giving them the fundef_cong attribute. The constructs that are predefined in Isabelle usually come with the respective congruence rules. But if you define your own higher-order functions, you may have to state and prove the required congruence rules yourself, if you want to use your functions in recursive definitions.

10.1 Congruence Rules and Evaluation Order

Higher order logic differs from functional programming languages in that it has no built-in notion of evaluation order. A program is just a set of equations, and it is not specified how they must be evaluated. However, for the purpose of function definition, we must talk about evaluation order implicitly, when we reason about termination. Congruence rules express that a certain evaluation order is consistent with the logical definition. Consider the following function.

```isar
function f :: "nat ⇒ bool"
where
  "f n = (n = 0 ∨ f (n - 1))"
```

For this definition, the termination proof fails.
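In Haskell, by contrast, the analogous definition is unproblematic precisely because (||) short-circuits, i.e. is evaluated left to right. That operational behavior is the content that a congruence rule for disjunction makes explicit in the logic. A sketch (the function name is taken from the example above, restricted to non-negative arguments):

```haskell
-- (||) only evaluates its right argument when the left one is
-- False, so the recursion stops as soon as n reaches 0. This
-- left-to-right evaluation order is exactly what the congruence
-- rule for disjunction expresses.
f :: Int -> Bool
f n = (n == 0) || f (n - 1)

main :: IO ()
main = print (f 3)
```

With the arguments of (||) flipped, as in the reverse definition discussed below, Haskell's f would diverge for every n > 0, because the recursive call would be forced first.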
The default configuration specifies no congruence rule for disjunction. We have to add a congruence rule that specifies left-to-right evaluation order:

```
lemma disj_cong[fundef_cong]:
  "(P = P') ⟹ (¬ P' ⟹ Q = Q') ⟹ (P ∨ Q) = (P' ∨ Q')"
  by blast
```

Now the definition works without problems. Note how the termination proof depends on the extra condition that we get from the congruence rule.

However, as evaluation is not a hard-wired concept, we could just turn everything around by declaring a different congruence rule. Then we can make the reverse definition:

```
lemma disj_cong2[fundef_cong]:
  "(¬ Q' ⟹ P = P') ⟹ (Q = Q') ⟹ (P ∨ Q) = (P' ∨ Q')"
  by blast
```

```
fun f' :: "nat ⇒ bool" where
  "f' n = (f' (n - 1) ∨ n = 0)"
```

These examples show that, in general, there is no "best" set of congruence rules. However, such tweaking should rarely be necessary in practice, as most of the time, the default set of congruence rules works well.
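To make the role of the congruence rule concrete: once a left-to-right rule for disjunction is declared, the termination condition generated for f carries the guard ¬ n = 0. The proof can then be sketched as follows (an untested sketch, not from the tutorial; the exact shape of the generated subgoal may differ):

```isabelle
termination f
proof (relation "measure (λn. n)")
  show "wf (measure (λn. n))" by simp
next
  (* The guard ¬ n = 0 comes from the left-to-right congruence
     rule for ∨: the recursive call f (n - 1) is only relevant
     when the first disjunct n = 0 is false. *)
  fix n :: nat
  assume "¬ n = 0"
  then show "(n - 1, n) ∈ measure (λn. n)" by simp
qed
```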
Work Practices and Challenges in Pull-Based Development: The Integrator's Perspective

Georgios Gousios⁎, Andy Zaidman†, Margaret-Anne Storey‡, Arie van Deursen†
⁎ Radboud University Nijmegen, the Netherlands Email: g.gousios@cs.ru.nl
† Delft University of Technology, the Netherlands Email: {a.e.zaidman, arie.vandeursen}@tudelft.nl
‡ University of Victoria, BC, Canada Email: mstorey@uvic.ca

Abstract—In the pull-based development model, the integrator has the crucial role of managing and integrating contributions. This work focuses on the role of the integrator and investigates working habits and challenges alike. We set up an exploratory qualitative study involving a large-scale survey of 749 integrators, to which we add quantitative data from the integrator's project. Our results provide insights into the factors they consider in their decision making process to accept or reject a contribution. Our key findings are that integrators struggle to maintain the quality of their projects and have difficulties with prioritizing contributions that are to be merged. Our insights have implications for practitioners who wish to use or improve their pull-based development process, as well as for researchers striving to understand the theoretical implications of the pull-based model in software development.

### I. INTRODUCTION

Pull-based development as a distributed development model is a distinct way of collaborating in software development. In this model, the project's main repository is not shared among potential contributors; instead, contributors fork (clone) the repository and make their changes independent of each other. When a set of changes is ready to be submitted to the main repository, they create a pull request, which specifies a local branch to be merged with a branch in the main repository. A member of the project's core team (from hereon, the integrator) is responsible for inspecting the changes and integrating them into the project's main development line.
The role of the integrator is crucial. The integrator must act as a guardian for the project’s quality while at the same time keeping several (often, more than ten) contributions “in-flight” through communicating modification requirements to the original contributors. Being a part of a development team, the integrator must facilitate consensus-reaching discussions and timely evaluation of the contributions. In Open Source Software (OSS) projects, the integrator is additionally taxed with enforcing an online discussion etiquette and ensuring the project’s longevity by on-boarding new contributors. The pull-based development process is quickly becoming a widely used model for distributed software development [1]. On GitHub alone, it is currently being used exclusively or complementarily to the shared repository model in almost half of the collaborative projects. With GitHub hosting more than 1 million collaborative projects and competing services, such as BitBucket and Gitorious, offering similar implementations of the pull-based model, we expect the pull-based development model to become the default model for distributed software development in the years to come. By better understanding the work practices and the challenges that integrators face while working in pull-based settings, we can inform the design of better tools to support their work and come up with best practices to facilitate efficient collaboration. To do so, we set up an exploratory qualitative investigation and survey integrators on how they use the pull-based development model in their projects. Our field of study is GitHub; using our GHTorrent database [2], we aimed our survey at integrators from high profile and high volume projects. An explicit goal is to learn from many projects rather than study a few projects in depth. We therefore use surveys as our main research instrument, generously sprinkled with open-ended questions. 
We motivate our survey questions based on a rigorous analysis of the existing literature and our own experience with working with and analysing the pull-based model during the last 2 years. We conducted a two-round (pilot and main) survey with 21 and 749 respondents respectively. Our main findings reveal that integrators successfully use pull requests to solicit external contributions and we provide insights into the decision making process that integrators go through while evaluating contributions. The two key factors that integrators are concerned with in their day-to-day work are quality and prioritization. The quality phenomenon manifests itself by the explicit request of integrators that pull requests undergo code review, their concern for quality at the source code level and the presence of tests. Prioritization is also a concern for integrators as they typically need to manage large amounts of contribution requests simultaneously.

### II. BACKGROUND AND RELATED WORK

The goal of distributed software development methods is to allow developers to work on the same software product while being geographically and timezone dispersed [3]. The proliferation of distributed software development techniques was facilitated by the introduction of online collaboration tools such as source code version control systems and bug databases [4], [5]. The main differentiation across distributed software development methods is the process of integrating an incoming set of changes into a project's code base. This change integration process has gone through many phases, as the collaboration tools matured and adapted to changing development needs; pull-based development [1] is the latest of those developments.

In distributed software development, the first step towards integrating changes is evaluating the proposed contributions. This is a complex process, involving both technical [6], [7], [8] and social aspects [9], [10], [11]. Mockus et al.
[6] analyzed two early OSS communities, Mozilla and Apache, and identified common patterns in evaluating contributions, namely the commit-then-review process. As an alternative, the Apache community also featured a review process through mailing list patch submissions. Rigby and Storey examined the peer review process in OSS mailing lists [7] and found that developers filter emails to reduce evaluation load, prioritize using progressive detail within emails containing patches and delegate by appending names to the patch email recipients. Jiang et al. [8] analyzed patch submission and acceptance in the Linux kernel project, which is using a preliminary pull-based development model, and found that, through time, contributions are becoming more frequent, while code reviews are taking less time. As the change submission and integration models evolve, so do the evaluation processes. Bacchelli and Bird [12] refer to lightweight, branch-based peer reviews as “modern” code review. This kind of peer review is similar to the reviews taking place in pull-based development in many aspects, with an important difference: the process for accepting a contribution is pre-determined and requires sign-off by a specific number of integrators. They find that while the stated purpose of modern code review is finding defects, in practice, the benefits in knowledge transfer and team awareness outweigh those stemming from defect finding. In a similar quantitative study [13], Rigby and Bird analyzed branch-based code review processes in OSS and commercial systems and found that reviewed patches are generally very small while two reviewers find an optimal number of defects. In recent work, Gousios et al. [1] and Tsay et al. [14] investigated quantitatively what factors underline the acceptance of contributions in pull-based development; both find similar effects, but the dominating factors (hotness of project area and social distance, respectively) are vastly different. 
This difference suggests that there may be no underlying processes for contribution evaluation in pull-based development that are in effect across projects. In turn, this calls for a more in-depth, qualitative study to help us understand how integrators evaluate contributions. Initial qualitative evidence on how integrators assess contributions has been reported by Pham et al. [15], but the focus of this work was the evaluation of testing practices rather than the pull-based development model.

A number of social aspects also affect the evaluation of contributions. Ducheneaut found that developers looking to get their contributions accepted must become known to the core team [10]. Then, core team members would use the developer's previous actions as one of the signals for judging contributions. Similarly, Krogh et al. [9] found that projects have established implicit "joining scripts" to permit new developers to contribute to the project, according to which they examine the developers' past actions to permit access to the main repository. There is no empirical evidence on whether the developer's previous actions play a significant role in contribution assessment in the context of pull-based development; in fact, quantitative data from Gousios et al. [1] suggest otherwise. Finally, Marlow et al. [11] found that developers on GitHub use social signals, such as the developer's coding activity and the developer's social actions (e.g. following other developers), in order to form an impression of the quality of incoming contributions.

### III. RESEARCH QUESTIONS

Our examination of the literature revealed that while several researchers have examined how developers evaluate contributions and collaborate in the context of OSS or, more recently, GitHub, no work has yet examined how integrators perceive pull-based development.
With pull-based development rapidly rising in popularity, it is important to expand our understanding of how it works in practice and what challenges developers in general and integrators in particular face when applying it. Consequently, our first question explores how integrators employ the pull-based development model in their projects at the project level:

**RQ1: How do integrators use pull-based development in their projects?**

To make the analysis easier, we further refine RQ1 in the following subquestions:

- RQ1.1 How do integrators conduct code reviews?
- RQ1.2 How do integrators merge contributions?

After a contribution has been received, the integrators must decide whether it is suitable for the project or not. Recent quantitative work identified that, across projects, simple factors such as the recent activity of the project area affected by the contribution [1] and social distance between the contributor and the integrator [14] can be used to predict whether a contribution will be accepted or not. What criteria do the integrators use to make this decision? This motivates our second research question:

**RQ2: How do integrators decide whether to accept a contribution?**

When evaluating contributions in collaborative environments, a common theme is quality assessment [6], [7], [12]. In the context of pull-based development, the asynchrony of the medium combined with its high velocity may pose additional (e.g. timing) requirements. It is beneficial to know what factors the integrators examine when evaluating the quality of a contribution and what tools they use to automate the inspection, as the results may be used to design tools that automate or centralize the evaluation process. Therefore, our third research question is as follows:

**RQ3: How do the integrators evaluate the quality of contributions?**

On busy projects, or in projects with busy integrators, contributions can pile up.
It is not uncommon for large projects (for example Ruby on Rails) to have more than 100 pull requests open at any time. How do integrators cope with such a situation? How do they select the next contribution to work on when many need their immediate attention? This leads to our fourth research question:

**RQ4: How do the integrators prioritize the application of contributions?**

The challenges of online collaboration have been a very active field of study, also in the field of distributed software development [4]. The pull-based development setting is unique: the asynchrony between the production of the code and its integration in a project's code base, along with the increased transparency afforded by platforms like GitHub, theoretically allow contributors and integrators to co-ordinate more efficiently. But is this so? How do integrators perceive the theoretical advantages of pull-based development in practice? By understanding the challenges that integrators face when applying pull-based development in their projects, we may better understand the limits of the pull-based method and inform the design of tools to help integrators cope with them. This leads to our final research question:

**RQ5: What key challenges do integrators face when working with the pull-based development model?**

### IV. Study Design

We conducted a mixed-methods exploratory study, using mostly qualitative but also quantitative data, that consisted of two rounds of data collection. In the first round, we ran a pilot survey among a limited set of selected integrators. After analyzing the results of the first round, we identified emerging themes (specifically, quality and prioritization), which we addressed by including related questions in the second round. The survey results of the second round were further augmented with, and partitioned by, quantitative results for each specific project. In this section, we describe our research method in detail.

**A. Protocol**

Since our aim is to learn from a large number of projects, we used surveys, which scale well.

**Survey Design** The study took place in two rounds, a pilot round that gave us the opportunity to field test our initial questions and the final round through which we gathered the actual responses. Both surveys were split into three logical sections: demographic information, multiple choice or Likert-scale questions, and open-ended questions. The open-ended questions were intermixed with multiple choice ones; usually, the developer had to answer an open-ended question and then a related one with fixed answers. To further elicit the developers' opinions, in all questions that had predefined answers but no related open-ended question, we included an optional "Other" response. Finally, we intentionally used even Likert scales to force participants to make a choice. Overall, and excluding demographic questions, the survey included 7 open-ended questions, 7 Likert scale questions with an optional open-ended response and 6 multiple choice questions with no optional fields. The survey could be filled in in about 15 minutes.

The purpose of the survey pilot was to identify themes on which we should focus the main survey. As such, the pilot survey included fewer open-ended questions, but all multiple choice questions had optional open-ended reply fields. This allowed us to test our initial question set for strongly correlated answers (we removed several potential answers from multiple choice questions) and to identify two topics, namely quality and prioritization, which we addressed in the main survey round.

**Attracting participants** In previous work [1], we presented evidence that most repositories on GitHub are inactive, single user projects.
To ensure that our sample consisted of repositories that make effective and large-scale use of pull requests, we selected all repositories in our GHTorrent dataset [2] that have received at least one pull request for each week in the year 2013 (3,400 repositories). For the selected repositories, we extracted the top pull request integrators, as identified by the number of pull requests that they have merged, and built our correspondence list. For the pilot phase, we emailed 250 of those integrators randomly and received 21 answers (8% answer rate). For the data collection phase, we emailed integrators from the remaining 3,150 projects and received 749 answers (23% answer rate). The survey's web address was sent by personal email to all participants. We did not restrict access to the survey to invited users only. In fact, several survey respondents forwarded the survey to colleagues or advertised it on social media (Twitter) without our consent. After comparing the response set with the original set of projects we contacted, we found that 35% of the responses came through third party advertising of the survey. The survey ran from April 14 to May 1, 2014.

To encourage participation, we created a customized project report for each project in our correspondence list. The report included plots on the project's performance in handling pull requests (e.g. mean close time) on a monthly basis. The reports for all projects have been published online and since then have been widely circulated among developers. Of the 749 survey respondents, 138 also expressed gratitude for their report through email.

**B. Participants**

The majority of our respondents self-identified as project owners (71%), while 57% work for industry. Most of them also have more than 7 years of software development experience (81%) and considerable experience (> 3 years) in geographically distributed software development (76%).
² http://ghtorrent.org/pullreq-perf/

To identify the leading groups of respondents based on the combined effect of experience, role in the project and work place, we ran the kmodes clustering algorithm (a variation of kmeans for categorical data) on the dataset. The clustering results revealed that roughly a third of the respondents (275/749) are project owners with more than 7 years of industrial experience; of those, around 40% (108/275) also worked exclusively on the projects they responded about.

**C. Analysis**

We applied manual coding on the seven open-ended questions as follows: initially, three of the four authors individually coded a different set of 50 (out of 750) answers for each question. At least one and up to three codes were applied to each answer. The order of code application reflected the emphasis each answer gave on the code topic. The extracted codes were then grouped together and processed to remove duplicates and, in some cases, to generalize or specialize them. The new codes were then applied on all answers by the first author. When new codes emerged, they were integrated into the code set. On average, 30% more codes were discovered because we decided to code the full dataset.

In the survey, we asked integrators to optionally report a single repository name for which they handle most pull requests. 88% of the respondents did so. For the remaining 83 answers, we either resolved the repository names from the developer's emails (since integrators were invited to participate based on a specific email), or selected the most active project the developer managed pull requests for, while we also fixed typos in repository names. We excluded from further analysis answers for which we could not obtain a repository name (61 answers). After we resolved the repository names, we augmented the survey dataset with information from the GHTorrent database.
Specifically, for each project, we calculated the mean number of pull requests per month and the number of integrators for the period July 2013 to July 2014. Using those metrics, and for each one of them, we split the project population into three equally sized groups (small, medium and large). Finally, we excluded answers from projects that received no pull request in this time frame (14 answers). None of these were in our original contact list.

### V. Results

In this section, we present our findings per research question. To enable traceability, we include direct quotes from integrators along with the answer identifier in our dataset (e.g. R1 corresponds to answer 1). Similarly, in the case of coded open-ended questions, we present the discovered codes slanted.

**A. RQ1: How do integrators use pull-based development in their projects?**

1) Overall use: To understand why and how projects use the pull-based development model, we asked integrators a multiple choice question that included the union of potential uses of pull requests that have been reported in the literature [1], [16], [15]. Respondents also had the opportunity to report other uses not in our list. Overwhelmingly, 80% of the integrators use the pull-based development model for doing code reviews and 80% to resolve issues. Perhaps more interesting is that half of the integrators use pull requests to discuss new features (as R710 commented: "experimenting with changes to get a feel if you are on the right path"). This is a variation of the GitHub-promoted way of working with pull requests, where a pull request is opened as early as possible to invite discussion on the developed feature. 60% of the integrators use pull requests to solicit contributions from the community (people with no direct commit access to the repository), which seems low given the open nature of the GitHub platform.
We examined this response quantitatively, using the GHTorrent database: indeed, for 39% of the projects that responded, no pull request originated from the project community. There is a small overlap (30%) between projects responding that they do not use pull requests to solicit contributions from the community and those that actually did not receive a pull request. Moreover, another 28% of the projects reported that they have used pull requests to solicit contributions from the community even though they did not receive any external pull requests.

Only 4% (or 29) of the respondents indicated that they use pull requests for something else. The analysis of the answers reveals that the majority of the replies nevertheless aligns with the offered choice answers, with two notable exceptions. Respondent R635 mentions that they use pull requests in "every commit we make. We have a policy of having every commit, even bumping up version number for next release, coming in on a PR.". The project has effectively turned pull requests into a meta-version control system, one that only allows reviewed code to be merged. This merging behaviour is also in place within Microsoft [12] and in the Android project [13]. Another integrator is using pull requests as a time machine mechanism: R521: "Ideally, any change, because using PRs makes it easier to rollback a change if needed".

2) Code reviews: In the time between a pull request submission and before it is accepted, it becomes a subject of inspection. 75% of the projects indicate that they do explicit code reviews on all contributions (only 7% of the projects do not review their pull requests using GitHub, but those have specified alternative ways of doing code reviews as described below). On GitHub, anyone can participate in the inspection process. 50% of the integrators report that the project's community actively participates in code reviews; this is in contrast with Gousios et al.
[1], where we found that in all projects we examined, the community discussing pull requests was bigger than the core team.

In current code reviewing practices, using tools such as Gerrit [13] or CodeFlow [12], code review comments are intermingled with code and a predetermined approval process is in place. GitHub offers a more liberal code reviewing system (see https://github.com/blog/1124, accessed Jul 2014), where users can provide comments on either the pull request as a whole, the pull request code, or even individual commits comprising the pull request, but imposes no approval process. 75% of the integrators use inline code comments in the pull request to do code reviews; only 8% of the integrators report that they use commit comments. The absence of strict acceptance process support has created a market for code reviewing tools: of the 7% (or 52) of the integrators that indicated they are doing code reviews in another way, 20% (or 10) mentioned that they are explicitly using a different tool for doing code reviews.

Projects have established processes for doing code reviews. One of them is delegation; 42% of the integrators delegate a code review if they are not familiar with the code under review. Delegation is again not a strictly defined process on GitHub; by convention, it can occur by referencing (@username) a user name in the pull request body, but integrators report other ways to delegate work: for example, R62 uses video conferencing to discuss pull requests and assign work load, while others (e.g. R577, R587) use external tools with support for delegation. Another process is implicit sign-off, requiring approval by at least 2 reviewers, e.g. R481: "We have a rule that at least 2 of the reviewers need to sign off before […]".

Fig. 1: Signals used by integrators when deciding on whether a contribution will be accepted or not.

Overall, integrators emphasize the preservation of commit metadata by avoiding textual patches and cherry-picking, while some of them use history rewriting to avoid the formation of complicated networks of branches and merges.

**RQ1:** Integrators successfully use the pull-based model to accommodate code reviews, discuss new features and solicit external contributions. 75% of the integrators conduct explicit code reviews on all contributions. Integrators prefer merges that preserve commit metadata.

**B. RQ2: How do integrators decide whether to accept a contribution?**

The second research question elicits the signals that integrators use to decide on the fate of a contribution. We asked integrators an optional open-ended question and received 324 answers. The results are summarized in Figure 1. The most important factor leading to acceptance of a contribution is its quality. Quality has many manifestations in our response set; integrators examine the source code quality and code style of incoming code, along with its documentation and granularity: "Code style and whether or not it matches project style. Overall programming practice, lack of hacks and workarounds." (R32). At a higher level, they also examine the quality of the commit set and whether it adheres to the project conventions for submitting pull requests. A second signal that the integrators examine is project fit.
As respondent R229 states: "The most important factor is if the proposed pull request is in line with the goals and target of the project". A variation is technical fit: does the code fit the technical design of the project (R90: "Most important to us is that the contribution is in keeping with the spirit of the project's other APIs, and that its newly introduced code follow the total and functional style of the rest of the codebase"). Integrators also examine the importance of the fix/feature with respect to the current priorities of the project. This is common in the case of bug fixes: "If it fixes a serious bug with minimal changes, it's more likely to be accepted." (R131).

A third theme that emerged from the integrator responses is testing. Apart from assessing the quality of contributions using higher level signals, integrators also need to assess whether the contributed code actually works. Initially, integrators treat the existence of testing code in the pull request as a positive signal. Success of test runs by a continuous integration system also reinforces trust in the code: "All tests must pass integration testing on all supported platforms..." (R94). Finally, integrators resort to manual testing if automated testing does not allow them to build enough confidence: "If other developers verified the changes in their own clones and all went fine, then we accept." (R156).

It is interesting to note that the track record of the contributors is ranked low in the integrator checklist. This is in line with our earlier analysis of pull requests, in which we did not see a difference in treatment of pull requests from the core team or from the project's community [1]. Finally, for the majority of respondents, technical factors such as whether the contribution is in a mergeable state, its impact on the source code, or its correctness are not very important for the eventual decision to merge.
In such cases, integrators can simply postpone decisions until fixes are provided by the contributors: “…occasionally I go through discussion with committer on how to do things better or keep the code-style held in the whole project” (R300). The postponing effect has also been observed by Rigby and Storey [7].

**RQ2:** Integrators decide to accept a contribution based on its quality and its degree of fit to the project's roadmap and technical design.

**C. RQ3:** What factors do the integrators use to examine the quality of contributions?

When examining contributions, quality is among the top priorities for developers. With this research question, we explore how integrators perceive quality and what tools they use to assess it, by means of a pair of compulsory open-ended and multiple choice questions. The results are summarized in Figure 2.

1) Perception: One of the top priorities for integrators when evaluating pull request quality is conformance. Conformance can have multiple readings: for R39, conformance means “it matches the project’s current style (or at least improve upon it)” (project style), while for R155 conformance is to be evaluated against fitting with internal API usage rules (architecture fit). Many integrators also examine conformance against the programming language's style idioms (e.g. PEP8 for Python code). Integrators expect the contributed code to cause minimal friction with their existing code base and they try to ensure this by enforcing rules on what they accept.

Integrators often relate contribution quality to the quality of the source code it contains. To evaluate source code quality, they mostly examine non-functional characteristics of the changes. Source code that is understandable and elegant, has good documentation and provides clear added value to the project with minimal impact is preferred. Apart from source code, the integrators use characteristics of the pull request as proxies to evaluate the quality of the submission.
The quality (or even the existence) of the pull request documentation signifies an increased attention to detail by the submitter: “A submitter who includes a clear description of what their pull request does have usually put more time and thought into their submission” (R605). The integrators also examine the commit organization in the pull request: “well written commit messages; one commit about a single subsystem — each commit compiles separately” (R610) and its size. In the latter case, the integrators value small pull requests as it is easier to assess their impact (R246: “…the code has the minimum number of lines needed to do what it’s supposed to do” or R330: “is the diff minimal?”).

Testing plays an important role in evaluating submissions. Initially, the very existence of tests in the pull request is perceived as a positive signal. The integrators also examine whether the changes in the pull request are covered by existing or new tests (test coverage), while, in 4% of the cases, they report that they exercise the changes manually (manual testing). Moreover, in performance-critical code, performance degradation is frowned upon and in some cases, integrators require proof that performance is not affected by the proposed change, e.g. in R72: “Performance related changes require test data or a test case”.

Finally, integrators use social signals to build trust for the examined contribution. The most important one is the contributor’s reputation. The integrators build a mental profile for the contributor by evaluating their track record within the project (R405: “Who submitted the PR and what history did we have with him/her?”) or by searching for information about the contributor’s work in other projects (R445: “looking at […]”). Some integrators also use interpersonal relationships to make judgements for the contributor and, by proxy, for their work. The process of impression building through social signals has been further elaborated by Marlow et al. [11].
2) Tools: Quality evaluations can be supported by tools. To evaluate how often projects use tools, we gave integrators a selection of tools and asked them which ones they use in their projects. The vast majority (75%) of projects use continuous integration, either in hosted services or in standalone setups. Continuous integration services, such as Travis and CloudBees, allow projects to run their test suites against incoming pull requests, while integration with GitHub enables them to update pull requests with test outcomes. On the other hand, few projects use more dedicated software quality tools such as metric calculators (15%) or coverage reports (18%). It is interesting to note that practically all (98%) projects that use more advanced quality tools run them through continuous integration.

99 integrators responded that they are using other tools. By going through the responses, we see that integrators use a rather limited toolset. Specifically, only a handful of integrators reported that they are using linting tools, while dedicated static analysis tools are used in just two large scale C++ projects in our sample. In two more cases, the integrators reported that they rely on the language’s type system to eliminate bugs. Finally, the majority of integrators answered that they evaluate the quality manually (e.g. R291: “my brain is a powerful testing environment” or R353: “good eyes and many eyes”) even when they were asked what tools they are using to do so.

**D. RQ4:** How do the integrators prioritize the application of contributions?

Our fourth research question examines the factors integrators use to prioritize their work on evaluating contributions. To discover them, we asked integrators a compulsory open-ended question. The results are summarized in Figure 3.

The first thing that integrators examine is the contribution’s urgency. In case of bug-fixing contributions, the criticality of the fix is the most important feature to prioritize by.
Integrators examine at least the following factors to assess criticality: i) the contribution fixes a security issue, ii) the contribution fixes a serious new bug, iii) the contribution fixes a bug that other projects depend upon, and iv) the number of issues blocked by the unsolved bug. In the case of a contribution implementing new features, integrators examine whether the contribution implements customer requested features or features required for the development of other features. Several integrators also mentioned that they just examine the type of the contribution before its criticality; it is usually project policy to handle bug fixing contributions before enhancements, as is the case with R446: “Bug fixes first, then new features. Only if all bug fix pull requests are treated.”

The pull request age plays an important role in prioritization for integrators. It is interesting to note that many integrators prefer a first-in, first-out treatment of the pull requests before applying other prioritization criteria. Similarly, easy to assess (and therefore less complex) pull requests are preferred by integrators. The size of the patch, even though usually related to complexity, is used to quickly filter out small, easy to integrate contributions and process them first (e.g. R490: “The lower the number of lines/files changes, the more likely I am to process it first.”)

The contributor’s track record is a relatively important factor for prioritization and usually known contributors get higher priority. As R82 states it: “If I know the person, they get high priority. Sorry, strangers.”. A related criterion is the contributor’s origin; if the contributor is another core team member or, in business settings, a colleague, some projects assign higher priority to his/her contributions (e.g. R106, R183, R411), while some others specifically favour community contributions (e.g. R161, R398).
Finally, it is interesting to note that 18% of all integrators in our sample are not using any prioritization processes at all. When prioritizing contributions, integrators must apply multiple criteria in a specific sequence. Figure 3 depicts the frequencies of prioritization criteria usage for all reported application sequences. What we can see is that criticality, urgency and change size contribute to most prioritization criteria application sequences, while most integrators report that they apply at most two prioritization criteria.

**RQ4:** Integrators prioritize contributions by examining their criticality (in case of bug fixes), their urgency (in case of new features) and their size. Bug fixes are commonly given higher priority. One fifth of the integrators do not prioritize.

**E. RQ5:** What key challenges do integrators face when working with the pull-based development model?

We asked integrators an optional open-ended question and received 410 answers. We found two broad categories of challenges: technical challenges hamper the integrator’s ability to work effectively, while social challenges make it difficult for integrators to work efficiently with other project members.

1) **Technical challenges:** At the project level, maintaining quality is what most integrators perceive as a serious challenge. As incoming code contributions mostly originate from non-trusted sources, adequate reviewing may be required by integrators familiar with the project area affected by them. Reviewers’ availability is not guaranteed, especially in projects with no funded developers. Often, integrators have to deal with solutions tuned to a particular contributor requirement or an edge case; asking the contributor to generalize them to fit the project goals is not straightforward. A related issue is feature isolation; contributors submit pull requests that contain multiple features and affect multiple areas of the project.
As put by R509: “Huge, unwieldy, completed bundles of ‘hey I added a lot of features and fixes ALL AT ONCE!’ that are hell to review and that I’d like to *partially* reject if only the parts were in any way separable...”.

Several issues are aggravated the bigger or more popular a project is. Integrators of popular projects mentioned that the volume of incoming contributions is just too big (e.g. Ruby on Rails receives on average 7 new pull requests per day); consequently, they see triaging and work prioritization as challenges. As requests are kept on the project queue, they age: the project moves ahead in terms of functionality or architecture and then it is difficult to merge them without (real or logical) conflicts. Moreover, it is not straightforward to assess the impact of stale pull requests on the current state of the project or on each other.

Another category of technical challenges is related to the experience of the contributor. Integrators note that aspiring contributors often ignore the project processes for submitting pull requests, leading to unnecessary communication rounds. When less experienced developers or regular users attempt to submit a pull request, they often lack basic git skills (e.g. R42: “Lack of knowledge of git from contributors; most don’t know how to resolve a merge conflict.”). New contributors can be a valuable resource for a project; integrators report that they avoid confrontation in an effort to onboard new users.

Many of the challenges reported by the integrators are bound to the distributed nature of pull-based development. Lack of responsiveness on behalf of the contributor hurts the code review process and, by extension, project flow. This is especially pronounced in the case of hit and run pull requests, as they place additional reviewing and implementation burden on the integrator team. Integrators mention that the lack of centralized co-ordination with respect to project goals can lead to “chaos.
Lots of people trying to reach the same goal without coordinating” (R155). Finally, integrators also report inefficiencies in the GitHub platform itself. Specifically, many integrators complained about the quality of the code review tool offered by GitHub (R567: “A good code review tool with code analysis possibilities can help”) and made comparisons to their favourite ones (e.g. R288: “The mechanism itself is a huge step backwards from Reviewboard”), while others did not like the way GitHub handles notifications (e.g. R514: “Sifting through the GitHub information flood to find what, if any, I should address.”).

2) **Social challenges:** Integrators often have to make decisions that affect the social dynamics of the project. Integrators reported that explaining the reasons for rejection is one of the most challenging parts of their job, as hurting the contributor’s feelings is something they seek to avoid. As R255 explains: “Telling people that something is wrong without hurting their feelings or giving them an incorrect idea of my intentions.”. Similarly, integrators find that asking for more work from the contributors (e.g. as a result of a code review) can be difficult at times, as they “...worry about alienating our valued contributors” (R635). Motivating contributors to keep working on the project, even in the face of rejected contributions, is not easy for integrators either. (R708 describes hit and run pull requests nicely: “They (contributors) send a pull request with a bug but when I ask them to fix them then they just vanish and don’t respond to GitHub e-mails.”)

Reaching consensus through the pull request comment mechanism can be challenging. Integrators often find themselves involved in a balancing act of trying to maintain their own vision of the project’s future and incorporating (or rejecting) contributions that are tuned to the contributor’s needs.
Differences in opinion combined with the relative anonymity of the pull request comment mechanism can lead to unpleasant situations. Integrators may need to take action to maintain discussion etiquette (e.g. R449: “Dealing with loud and trigger-happy developers.”), enforce politeness rules or stop long, unhelpful (bikeshedding) discussions (R586: “be objective and avoid off-topics in discussions”). Multiple communication channels are not helping either; integrators find it difficult to synchronize between multiple sources.

On a more personal level, integrators find it difficult to handle the workload imposed by the open submission process afforded by the pull-based development model. For many of our respondents, managing contributions is not their main job; consequently, finding free time to devote to handling a pull request and context switching between various tasks puts a burden on integrators. As R470 notes: “Managing pull requests is not my full-time job, but it is a component of it. Mostly it is difficult to keep track of them while also completing my other tasks.”.

**RQ5:** Integrators are struggling to maintain quality and mention feature isolation and total volume as key technical challenges. Social challenges include motivating contributors to keep working on the project, reaching consensus through the pull request mechanism and explaining reasons for rejection without discouraging contributors.

VI. DISCUSSION

In this section, we compare and contrast our findings with existing work and present future work directions.

A. Quality

Throughout our analysis, the issue of quality evaluation was recurring. The respondents directly linked quality with acceptance, while also describing maintaining quality as a big challenge. According to integrators, quality emerges from attention to detail; code style, documentation, commit formatting and adherence to project conventions all help to build confidence in the contribution.
The issue of quality evaluation has been repeatedly mentioned in works on patch submission [7], [18], lightweight code review [12], [13] and testing [15]; in this sense, our work reinforces earlier findings. In addition, we document in detail what factors integrators examine in contributions when doing quality assessments. An open question is how to efficiently automate the quality evaluation for pull requests. While tools that automate the evaluation of many tasks that the developers do to determine quality (e.g. code style analyzers, test coverage, software quality metrics, impact analysis etc.) do exist, we have seen that developers go little beyond testing and continuous integration. To solve this issue, one could envisage a pluggable platform that, given a pull request update, runs a suite of tools and automatically updates the pull request with a configurable quality score. For the platform to be useful, it will have to automatically learn from and adapt to project-specific behaviours.

B. Testing

Integrators overwhelmingly use testing as a safety net when examining contributions. The inclusion of tests in a contribution is perceived as a positive signal, while (reverse) coverage is evaluated by many integrators. 75% of our respondents run tests automatically through continuous integration services. Pham et al. examined how testing works on GitHub [15]; our work confirms many of their findings (e.g. use of testing as a quality signal, manual examination when continuous integration fails) and complements it with more quantitative data about test diffusion on GitHub projects. Moreover, it is interesting to pinpoint the contradiction with the results of our previous work [1], where we found that inclusion of test code in a contribution was not a strong factor influencing either the decision to accept or the time to decide (Tsay et al. [14] report a similar result).
We speculate that this difference is due to how we modeled test inclusion (continuous rather than a dichotomous feature) in our previous study.

C. Work Prioritization

In large projects, integrators cannot keep up with the volume of incoming contributions. A potential solution could be a recommendation system that provides hints on which contributions need the integrator’s immediate attention. Existing work on assisted bug triaging (e.g. [19] or [20]) is not directly applicable to the pull-based model, as a pull request is not necessarily as static as a bug report. Researchers might need to come up with different methods of work prioritization that take into account the liveness and asynchrony of the pull-request model. Our analysis of how developers prioritize contributions is a first step in this direction.

D. Developer Track Records

One finding of this work is that a developer’s track record, while present in our response set, is not a commonly used criterion to assess or prioritize contributions by. With the rise of transparent work environments [16], and based on previous work on the subject [9], [11], one would expect that the developer’s track record would be used by the majority of integrators to make inferences about the quality of incoming contributions. Despite this, the track record is mostly used as an auxiliary signal; in both Figure 2 and Figure 3, we can see that developers equally mentioned the track record as top and second criterion for quality evaluation and prioritization.

E. Community Building

Community building through collaboration has been studied extensively in the context of OSS projects [9], [21], [22]. A common theme in those studies is that recruitment of new developers can be challenging [21], as core teams are reluctant to give access to the main repository without an initiation process [9].
Integrators in our study actually mentioned the opposite: it is maintaining the community momentum and motivating contributors to do more work that is not easy. Through transparency [16] and lowered barriers to participation [15], [1], the pull-based model can act as glue for communities built around projects, if integrators are keen enough on fostering their project’s communities by helping newcomers cope with tools and project processes, prioritizing the examination of community contributions and, in the extreme case, not rejecting unwanted contributions.

F. A Modern Theory of Software Change

In recent years, we are witnessing that collaborative, lightweight code review is increasingly becoming the default mechanism for integrating changes, in both collocated [12] and distributed [13], [1] development. Effectively, the pull request (in various forms) is becoming the atomic unit of software change. Existing works (e.g. [23], [24]) anticipated neither lightweight code reviews nor asynchronous integration of changes. This work can contribute to theory building by providing empirical evidence about the common practices of pull-based development.

VII. LIMITATIONS

We carefully designed the survey to gain insight into the work practices and challenges faced by integrators in pull-based development. We thoughtfully crafted the wording of each of the questions (to avoid ambiguous or leading questions), refining them through small pilot tests and consultations with other researchers with survey research expertise, and refined the questions yet further through a larger pilot study. The response categories we supplied for many of the questions were based on the existing literature, and were likewise refined through the pilot studies. For the questions that had multiple response options, we supplied an additional “other” field which was used to uncover responses not considered that we later coded.
Despite our best efforts, this work may be subject to the following limitations:

Generalizability: Since we did purposive sampling from the population of integrators, the findings may not apply to other populations of integrators (e.g. developers using other tools, integrators that work on private projects on GitHub or integrators that are not in the top three integrators for a given project). Moreover, in previous work [1], we found that the median number of pull requests across repositories is 2; in our sample, the smallest project had more than 400. We expect that if the study is repeated using random sampling for projects, the results will be slightly different, as the average project does not use pull requests in a high capacity. Furthermore, the integrators that responded to our survey may have introduced an additional bias to the results (non-responders may have had different insights or opinions).

Researcher bias: It is possible that researcher bias may have influenced the wording of questions (perhaps to be leading) as well as the coding of the open ended questions. As discussed above, we tested the questions through pilots and had experts evaluate them for this concern. In terms of the analysis of the open ended questions, we conducted a pilot study, and three of us separately coded a sample of the responses to derive these codes.

Research reactivity: The ordering of questions (one may provide context for the next one), the open ended questions, as well as a respondent’s possible tendency to appear in a positive light (for example, they wish to think they are fair or logical), may have influenced the accuracy of the answers provided.

VIII. CONCLUSIONS

Our work studies the pull-based development model from the integrator’s perspective. Our goal is to better understand the work practices of integrators working with the pull-based development model and to identify the challenges they face when integrating contributions.
The key contributions of this paper are as follows:

- A novel way of using the GHTorrent dataset to generate targeted reports, large scale surveys and augmenting qualitative datasets with quantitative data.
- A publicly available data set with 749 anonymized survey answers.
- A thorough analysis of survey data resulting in answers to our research questions on topics such as work practices in pull-based development, quality evaluation of contributions, work prioritization and open challenges when working with pull requests.

Our anonymized response set, our coded open-ended questions and custom-built R-based analysis and plotting tools are available in the GitHub repository gousiosg/pullreqs-integrators. This data set complements existing quantitative data sets (e.g. our own GHTorrent data set) and provides much needed context for analyzing and interpreting that data. Furthermore, our survey brings additional insights to the insightful but smaller scale interviews that have been conducted by other researchers on the pull-based model (e.g. [16], [11], [15], [14]).

We welcome replications of this work; potential directions include replications with integrators that (1) use different (non-GitHub) repositories, e.g., Bitbucket, (2) work on private repositories, and (3) work on non-pull request intensive projects. These replications will help in moving towards a theory of how pull-based development impacts distributed software development.

Last but not least, our findings point to several research directions (see Section VI) and have implications for both practice and research. Based on our results, integrators can structure their contribution evaluation processes in an optimized way and be informed about common pitfalls in community management.
Researchers can reuse our research methods and datasets to conduct large scale, mixed-methods research, while they can use our research findings as a basis to drive their work on pull request quality evaluation and work prioritization tools.

ACKNOWLEDGEMENTS

The authors would like to thank the survey participants for their time. This work has been partially funded by the NWO 639.022.314 — TestRoots project.
Supplementary Material - package runibic

The runibic biclustering algorithm is a parallel version of the original UniBic method. The algorithm implements very similar steps as its sequential predecessor (Wang et al., 2016):

1. Discretize the input matrix as described in (Wang et al., 2016). Determine the number of partitions using the significance parameter (default: $\alpha = 0.05$) of the bicluster that is going to be identified. Set the number of partitions. By default we set $k$ to 4, as in the original method, since we are interested in finding biclusters that contain at least 5 rows.
2. (executed in parallel) Create a matrix of indices $Y = y_{ij}$. Sort each of the rows of the input matrix and calculate: $y_{ij} = j$-th smallest entry in row $i$.
3. (executed in parallel) Apply the Longest Common Subsequence (LCS) algorithm in order to identify seeds for the future biclusters. The LCS algorithm is executed for each pair of the rows and parallelized using OpenMP; each time one of the rows is picked as the seed. Sort the seeds in decreasing order based on the length of the seed.
4. (executed partially in parallel) Extend biclusters by adding strict order-preserving biclusters to the previously identified seeds. Greedily add as many rows as possible to each of the seeds until the bicluster has more rows than columns, while maintaining its monotonically increasing order.
5. (executed partially in parallel) Extend biclusters to approximate ones, first by adding columns which have an error rate $r \leq 0.3$. Allow negative trends as well as approximate ones (i.e. those in which over $t \geq 85\%$ of the trend is in the order specified by a seed row). Both columns and rows are added one at a time. The extension is made by calculating the LCS between the extended seed and each of the rows (for a negative trend the sequence is reversed). Remove the seed from the list of potential solutions. Go back to Step 4 until the required number of biclusters is found or the list becomes empty.
6. Return nbics biclusters (up to 100) and wrap them into the biclust::Biclust class.

Implementation details

In the provided implementation of the UniBic algorithm, we migrated the original code from C to the C++11 programming language and added OpenMP support. Code refactoring allowed us to take advantage of multiple aspects of modern language programming:

- safer and more modern memory management replaced difficult to maintain C style memory allocations and deallocations,
- fast and efficient containers from the Standard Template Library (STL), such as vectors and sets, together with STL algorithms, were used for acceleration of common operations like iterate, sort, search, count, or copy (the original implementation in most cases allocated a large number of simple arrays and used loops with slow indexing for these operations),
- many redundant copies and memory allocations were removed,
- a couple of memory leaks, which caused segmentation faults for some datasets, were fixed.

Porting the code improved its interpretability and allowed us to remove multiple redundancies present in the previous UniBic implementation. For example, we replaced the original four functions that calculated the Longest Common Subsequence with a single one with multiple options. Similar improvements were made in other code sections, for example: in discretization, in calculation of the Longest Common Subsequence between each pair of rows, and in the clustering and bicluster expansion parts.

In order to provide more insightful analysis into the modules of the UniBic algorithm, we separated and exported the major steps of the original method. Thus, the algorithm may be run using either a single command, or executed step by step. This provides much better control over the code, improves clarity of the method, and allows its future customization.
The algorithm provided in the runibic package is divided into the following sections:

- set_runibic_params - a function that sets the default parameters for the algorithm,
- runDiscretize - the original UniBic discretization approach, which takes into account the number of ranks from the 'div' parameter (default value: 15) and the quantile value from the 'q' parameter (by default: div/number_of_columns),
- unisort - a function that sorts the rows of a matrix and returns the indexes of the sorted columns in each row (Step 1),
- calculateLCS - a function that calculates the Longest Common Subsequence (LCS) between each unique pair of rows in the matrix and returns a list of LCS lengths and row pairs (Step 2),
- cluster - the main biclustering method, which builds biclusters based on the input data and the calculateLCS results (Steps 3-5).

By designing a modular structure for the package we intended to simplify flexible modifications of the original algorithm. Such modifications may use different preprocessing or ranking techniques, or expand biclusters using different rows as seeds. An example is a different method of sorting the results from calculateLCS. The proposed method, which is based on a stable STL sort, can be used as an alternative to the old C-style pointer sorting based on a Fibonacci heap. In our opinion the proposed method is more robust and better reflects the original intention. The choice of method may influence the outcome of the algorithm, as different LCSes of the same length may be chosen as seeds.

In order to improve the algorithm's execution time, the most crucial and computationally intensive parts of the code were parallelized using the OpenMP standard. One of the most time-consuming steps of UniBic is calculating the Longest Common Subsequence (LCS) between unique pairs of rows. We rearranged the code and achieved parallelization where each core of the CPU simultaneously calculates the LCS for a unique pair of rows.
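The unisort/calculateLCS split can be mimicked in a few lines (an illustrative Python sketch, not the package's API): the rank-index matrix is built once, and the LCS work then decomposes into independent row-pair tasks, which is exactly the granularity the OpenMP parallelization exploits.

```python
from itertools import combinations

def unisort(matrix):
    """For each row, return the column indices ordered by increasing value,
    i.e. y[i][j] = index of the j-th smallest entry of row i (Step 2)."""
    return [sorted(range(len(row)), key=row.__getitem__) for row in matrix]

m = [[0.3, 0.1, 0.9],
     [0.5, 0.2, 0.4],
     [0.7, 0.6, 0.8]]
y = unisort(m)
print(y)  # [[1, 0, 2], [1, 2, 0], [1, 0, 2]]

# Every unique pair of rows is an independent LCS task; with n rows there
# are n*(n-1)/2 of them, processed concurrently in the C++ implementation.
pairs = list(combinations(range(len(m)), 2))
print(pairs)  # [(0, 1), (0, 2), (1, 2)]
```

Because the tasks share no mutable state, the pair loop parallelizes without synchronization beyond collecting the per-pair results.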
Similarly, we also parallelized the data preprocessing required by the method, as well as the expansion of each of the biclusters, which requires calculating the LCS between each row and the seed. All the mentioned changes allowed us to obtain biclustering results within several minutes on a modern multi-core computer.

Example: workflow with SummarizedExperiment

In this example we present how to use the runibic package with the SummarizedExperiment class, which contains lists of assays. We start by loading the required packages:

```r
library(runibic)
library(SummarizedExperiment)
library(gplots)
```

We create a matrix with 1000 rows and 100 columns and implant in it 5 upregulated biclusters of size 20x20. We intentionally sort the values in each row of the biclusters in monotonically increasing order. We first inspect the matrix using the heatmap.2 function from the gplots library. The results are presented in Fig. 1.

```r
set.seed(42)
m <- matrix(rnorm(1000*100, mean=0, sd=1), 1000, 100)
rownames(m) <- paste("row", 1:1000, sep="_")
colnames(m) <- paste("col", 1:100, sep="_")
for (k in 0:4) {
  for (i in 1:20) {
    bicl <- rnorm(20, mean=0, sd=1) + 4
    m[i+k*20, 1:20+k*20] = sort(bicl)
  }
}
heatmap.2(m, Colv = FALSE, Rowv = FALSE, dendrogram = "none", density.info = "none",
          trace = "none", margins = c(11,11), labRow = FALSE, labCol = FALSE)
```

![Fig. 1. A heatmap for the randomly initialized matrix with 5 implanted biclusters.](image)

In the next step we build a SummarizedExperiment from the exemplary matrix. We assume that it has a single assay only, in which matrix m is contained.

```r
se <- SummarizedExperiment(assays=list(m))
```

Our next step is to run the runibic biclustering method. For matrices that have fewer than 2000 rows, the runibic algorithm in Bioconductor 3.6 requires setting two additional parameters, q and div. Starting from Bioconductor 3.7, setting these parameters is no longer necessary. We also specify nbic, which is the number of biclusters to be returned by the runibic method.
```r
# Run runibic on the toy dataset
# For Bioconductor 3.6:
res1 <- runibic(m, q = 15/ncol(m), div = 15, nbic = 5)
res2 <- runibic(se, q = 15/ncol(m), div = 15, nbic = 5)
# Starting from Bioconductor 3.7:
res1 <- runibic(m, nbic = 5)
res2 <- runibic(se, nbic = 5)

# Inspect the results
res1
# An object of class Biclust
# Call:
# NULL
# Number of Clusters found: 5
# First 5 Cluster sizes:
#                    BC 1 BC 2 BC 3 BC 4 BC 5
# Number of Rows:      19   19   13   12   14
# Number of Columns:   13   13   17   17   14
```

We inspect the results by drawing a heatmap for the first bicluster. Two calls are compared: one using the input matrix, the other using the `SummarizedExperiment` class.

```r
# Extract the first bicluster either from matrix m or from the SummarizedExperiment class:
heatmap.2(bicluster(m, res1, 1)[[1]], Colv = FALSE, Rowv = FALSE, dendrogram = "none",
          density.info = "none", trace = "none", margins = c(5, 5))
heatmap.2(bicluster(assays(se)[[1]], res2[[1]], 1)[[1]], Colv = FALSE, Rowv = FALSE,
          dendrogram = "none", density.info = "none", trace = "none", margins = c(5, 5))
```

**Fig. 2.** A heatmap for the examined bicluster.

Now we inspect the second bicluster from the `SummarizedExperiment`. As this class may contain multiple assays, it is important to refer to the requested assay. In our toy example the list contains a single assay, therefore we use `[[1]]` to extract the biclusters for this particular assay.
```r
# Extract the second bicluster from the SummarizedExperiment class
bicluster(assays(se)[[1]], res2[[1]], 2)
# $Bicluster2
#           col_61   col_63   col_64   col_66   col_68   col_69   col_71   col_72   col_74   col_76   col_78   col_79   col_80
# row_68 0.9035566 2.648049 2.280202 2.638208 3.094568 3.202966 3.483871 3.513828 3.882798 3.994085 4.583249 5.217786 5.238829

# Notice that the same result could be obtained with the following command:
bicluster(m, res1, 2)
```

Example: workflow with ExpressionSet

In the second example we show how to use and visualize the results of runibic on the real dataset GDS589, which can be downloaded using the GEOquery package.

```r
library(runibic)
library(GEOquery)
library(pcaMethods)
library(QUBIC)
library(qgraph)
```

First we load all the required libraries, i.e. our package runibic, GEOquery (to download the dataset from Gene Expression Omnibus), pcaMethods (for missing value imputation), QUBIC (to create a gene interaction network), and qgraph (to visualize the network in the form of a graph). We are now ready to download the dataset from the NCBI Gene Expression Omnibus (GEO) repository. We use the GDS2eSet function to transform the dataset into an ExpressionSet. The dataset contains 8799 features and 122 samples. To prepare the matrix for runibic we apply the llsImpute method from the pcaMethods package to impute missing values. Once the data is ready for the algorithm, we apply the runibic procedure and analyze the results.

```r
gdsname = "GDS589"
arr <- getGEO(gdsname, destdir = "./")
eset <- GDS2eSet(arr, do.log2 = F)
eset
#ExpressionSet (storageMode: lockedEnvironment)
# # element names: exprs
# #protocolData: none
# #sampleData:
# # sampleNames: GSM15231 GSM15232 ... GSM15188 (122 total)
# # varLabels: sample strain tissue description
# # varMetadata: labelDescription
# #featureData:
# # featureNames: A01157cds_s ... Z96106_at (8799 total)
# # fvarLabels: ID Gene title ...
GO:Component ID (21 total)
# # featureMetadata: Column labelDescription
# #experimentData: use 'experimentData(object)'
# # pubMedIds: 15990018
# #Annotation:

# Extract expression data
array <- exprs(eset)
# Apply missing value imputation to the data
result <- pcaMethods::llsImpute(data.matrix(array[rowSums(is.na(array)) != ncol(array), ]), k = 5)
array <- completeObs(result)
```

Please notice the usage of the useLegacy = TRUE flag, which was introduced in the runibic package for Bioconductor 3.7. With this flag, the approximate trends are added to each of the seeds according to the equation that originally appeared in the UniBic implementation (1):

$$r = \lfloor cols(B) \cdot t - 1 \rfloor$$

where $cols(B)$ is the number of columns of a given bicluster $B$ and $t = 0.85$ by default; a row is added if at least $r$ of its columns are in the order consistent with the seed. Starting from Bioconductor 3.7, the following threshold is used for approximate trends (2):

$$r = \lfloor cols(B) \cdot t \rfloor$$

In the previous version of the software, if the number of columns of a bicluster was equal to 4, all rows with at least 2 matching columns would be added to such narrow biclusters (since $\lfloor 4 \cdot 0.85 - 1 \rfloor = 2$), which unnecessarily inflated their sizes. From Bioconductor 3.7, only trends that have at least 3 out of 4 columns ($\lfloor 4 \cdot 0.85 \rfloor = 3$) in the order consistent with the seed are included. We are ready to execute the runibic procedure on the gene expression array.

```r
# Apply runibic to the expression data
# For Bioconductor 3.6:
res <- runibic(array)
# For Bioconductor 3.7:
res <- runibic(array, useLegacy=TRUE)

# Inspect the result
res
# An object of class Biclust
# Call:
# NULL
# Number of Clusters found: 100
# First 5 Cluster sizes:
#                    BC 1 BC 2 BC 3 BC 4 BC 5
# Number of Rows:    2344 1011  867  696  525
# Number of Columns:    5    6    6    6    7
```

Runibic returned 100 biclusters. Some of them are very large and therefore difficult to visualize. In our next step we visualize one of the biclustering results of *runibic* using the *qunetwork* function from the *QUBIC* package.
The function creates a network of co-expressed genes based on the identified bicluster. The network is then visualized using the *qgraph* function. The results are presented in Fig. 3. For visualization purposes, we have chosen the 31st bicluster, which has a limited number of rows (fewer than 50). The vertices of the graph represent genes, and the width and color of the edges represent the strength of the correlations between the genes. Negative correlations are plotted in red, positive in green.

```r
# Create a network of connections for the given bicluster
net1 <- qunetwork(array, res, number = 31, group = 31, method = "spearman")
# Extract edge coloring:
g <- qgraph(cbind(1:6, 1:6, c(-0.9, -0.6, -0.3, 0.3, 0.6, 0.9)), DoNotPlot = TRUE)
# Set colors and width of the edges
col <- g$graphAttributes$Edges$color
lwd <- g$graphAttributes$Edges$width
# Create a plot:
qgraph(net1[[1]], groups = net1[[2]], layout = "spring", minimum = 0.6,
       color = c(rainbow(length(net1[[2]]) - 1), "gray"), edge.label = FALSE)
# Add a legend:
legend(x = 1.35, y = 0.02, title = "Correlation value:",
       legend = c(-0.9, -0.6, -0.3, 0.3, 0.6, 0.9), col = col, lwd = lwd, bty = "n")
```

![Fig. 3. Graph representing the network of correlations for the 31st bicluster.](image-url)

You may notice how strongly correlated some genes within the bicluster are; genes with somewhat weaker correlations are included as well. We create a parallel coordinates plot for the given bicluster in order to inspect its pattern. The results are presented in Fig. 4.

```r
# Show a parallel coordinates plot of the same bicluster.
parallelCoordinates(array, res, 31)
```

![Fig. 4. Parallel coordinates plot presenting gene expressions of the 31st bicluster.](image)

We are interested in whether the results found in the previous step are biologically meaningful. Thus, we perform biological validation of the findings using the annotation of the dataset (rgu34a) as well as the GOstats package, performing enrichment analysis with a hypergeometric test.
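The over-representation p-value that the hypergeometric test computes is a tail probability of the hypergeometric distribution; a stdlib-only Python sketch (the gene counts below are made-up toy numbers, not taken from this analysis):

```python
from math import comb

def hypergeom_over_pvalue(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N population genes, K annotated, n drawn).
    This is the over-representation p-value used in GO enrichment tests."""
    total = comb(N, n)
    return sum(comb(K, x) * comb(N - K, n - x)
               for x in range(k, min(K, n) + 1)) / total

# Toy numbers: a universe of 50 genes, 10 annotated with a term, and a
# bicluster of 12 genes of which 6 carry the term.
p = hypergeom_over_pvalue(50, 10, 12, 6)
print(round(p, 5))  # 0.00732
```

Seeing 6 annotated genes where only 12 * 10/50 = 2.4 were expected is thus unlikely under the null, which is what "enriched" means here.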
```r
# Load the required annotation
library(rgu34a.db)
library(GOstats)

# We extract the names of the genes that belong to a given bicluster.
# First we extract the probe identifiers from the biclust result and map them to probe names
biclusterID <- 1
probeids <- which(res@RowxNumber[, biclusterID])
probes <- rownames(array)[probeids]

# Next, we map genes to unique ENTREZ identifiers, keeping only the first name of the gene.
genes <- unique(mapIds(rgu34a.db, keys=as.character(probes), c("ENTREZID"),
                       keytype="PROBEID", multiVals="first"))

# Similarly we map the names of the probes in the array.
universe <- unique(mapIds(rgu34a.db, keys=as.character(rownames(array)), c("ENTREZID"),
                          keytype="PROBEID", multiVals="first"))

# We prepare the required parameters for the hypergeometric test and run the analysis
params <- new("GOHyperGParams", geneIds = genes, universeGeneIds = universe,
              ontology = "BP", annotation = "rgu34a.db")
hgOver <- hyperGTest(params)
```

During the procedure the following warning will appear, as the relation between probes and genes is not unique:

```r
# select() returned 1:many mapping between keys and columns
```

We now take a closer look at the findings of the algorithm by inspecting the result of the test.
```r
print(summary(hgOver))
```

<table>
<thead>
<tr> <th>#</th> <th>GOBPID</th> <th>Pvalue</th> <th>OddsRatio</th> <th>ExpCount</th> <th>Count</th> <th>Size</th> <th>Term</th> </tr>
</thead>
<tbody>
<tr> <td>#1</td> <td>GO:0002376</td> <td>1.658316e-07</td> <td>1.517021</td> <td>265.215965</td> <td>328</td> <td>789</td> <td>immune system process</td> </tr>
<tr> <td>#2</td> <td>GO:0002684</td> <td>3.660337e-07</td> <td>1.788777</td> <td>111.599113</td> <td>154</td> <td>332</td> <td>positive regulation of immune system process</td> </tr>
<tr> <td>#3</td> <td>GO:0046649</td> <td>4.675320e-07</td> <td>1.926398</td> <td>84.035477</td> <td>121</td> <td>250</td> <td>lymphocyte activation</td> </tr>
<tr> <td>#4</td> <td>GO:0048732</td> <td>3.825497e-07</td> <td>1.767646</td> <td>112.271397</td> <td>154</td> <td>334</td> <td>gland development</td> </tr>
<tr> <td>#5</td> <td>GO:0008283</td> <td>3.085573e-06</td> <td>1.460346</td> <td>284.039911</td> <td>344</td> <td>845</td> <td>cell proliferation</td> </tr>
<tr> <td>#6</td> <td>GO:0042592</td> <td>1.741112e-06</td> <td>1.646302</td> <td>261.90687</td> <td>319</td> <td>780</td> <td>homeostatic process</td> </tr>
<tr> <td>#7</td> <td>GO:0002682</td> <td>1.898996e-06</td> <td>1.604551</td> <td>155.297561</td> <td>201</td> <td>462</td> <td>regulation of immune system process</td> </tr>
<tr> <td>#8</td> <td>GO:0070482</td> <td>2.506913e-06</td> <td>1.756590</td> <td>99.834146</td> <td>157</td> <td>378</td> <td>response to oxygen levels</td> </tr>
<tr> <td>#9</td> <td>GO:0001775</td> <td>3.132966e-06</td> <td>1.652226</td> <td>127.061641</td> <td>168</td> <td>378</td> <td>cell activation</td> </tr>
<tr> <td>#10</td> <td>GO:0048513</td> <td>5.364295e-06</td> <td>1.344049</td> <td>492.784035</td> <td>539</td> <td>1268</td> <td>animal organ development</td> </tr>
<tr> <td>#11</td> <td>GO:0009605</td> <td>5.999994e-06</td> <td>1.380084</td> <td>358.999557</td> <td>419</td> <td>1068</td> <td>response to external stimulus</td> </tr>
<tr> <td>#12</td> <td>GO:0002694</td> <td>6.874545e-06</td> <td>1.916519</td> <td>66.550698</td> <td>96</td> <td>238</td> <td>regulation of leukocyte activation</td> </tr>
<tr> <td>#13</td> <td>GO:0036293</td> <td>7.897443e-06</td> <td>1.741327</td> <td>92.102882</td> <td>126</td> <td>274</td> <td>response to decreased oxygen levels</td> </tr>
<tr> <td>#14</td> <td>GO:0070371</td> <td>8.163009e-06</td> <td>2.056243</td> <td>52.774279</td> <td>79</td> <td>187</td> <td>ERK1 and ERK2 cascade</td> </tr>
<tr> <td>#15</td> <td>GO:0060429</td> <td>8.283770e-06</td> <td>1.544665</td> <td>158.995122</td> <td>202</td> <td>473</td> <td>epithelium development</td> </tr>
</tbody>
</table>

We have discovered that multiple GO terms are enriched within the result. We still need to adjust for multiple hypothesis testing. We perform the Benjamini-Hochberg procedure for false discovery rate control, as some of the previously detected p-values might have appeared by chance. We apply a cutoff of 0.05.
```r
# Adjust p-values using the Benjamini-Hochberg procedure
Pval <- p.adjust(pvalues(hgOver), method="BH", n = length(pvalues(hgOver)))
# Count the number of GO terms that remain after applying the threshold
enrichedNum <- length(which(Pval < 0.05))
# Format the columns and present the results
enr <- data.frame(summary(hgOver)[1:enrichedNum, c('GOBPID', 'Pvalue')],
                  Pval[1:enrichedNum],   # adjusted p-values
                  summary(hgOver)[1:enrichedNum, c('OddsRatio', 'ExpCount', 'Count', 'Size', 'Term')])
```

The top of the resulting table contains the terms that remain significant after the adjustment:

<table>
<thead>
<tr> <th>#</th> <th>GOBPID</th> <th>Pvalue</th> <th>OddsRatio</th> <th>ExpCount</th> <th>Count</th> <th>Size</th> <th>Term</th> </tr>
</thead>
<tbody>
<tr> <td>#1</td> <td>GO:0002376</td> <td>1.658316e-07</td> <td>1.517021</td> <td>265.215965</td> <td>328</td> <td>789</td> <td>immune system process</td> </tr>
<tr> <td>#2</td> <td>GO:0002684</td> <td>3.660337e-07</td> <td>1.788777</td> <td>111.599113</td> <td>154</td> <td>332</td> <td>positive regulation of immune system process</td> </tr>
<tr> <td>#3</td> <td>GO:0046649</td> <td>4.675320e-07</td> <td>1.926398</td> <td>84.035477</td> <td>121</td> <td>250</td> <td>lymphocyte activation</td> </tr>
<tr> <td>#4</td> <td>GO:0048732</td> <td>3.825497e-07</td> <td>1.767646</td> <td>112.271397</td> <td>154</td> <td>334</td> <td>gland development</td> </tr>
<tr> <td>#5</td> <td>GO:0008283</td> <td>3.085573e-06</td> <td>1.460346</td> <td>284.039911</td> <td>344</td> <td>845</td> <td>cell proliferation</td> </tr>
<tr> <td>#6</td> <td>GO:0042592</td> <td>1.741112e-06</td> <td>1.646302</td> <td>261.90687</td> <td>319</td> <td>780</td> <td>homeostatic process</td> </tr>
<tr> <td>#7</td> <td>GO:0002682</td> <td>1.898996e-06</td> <td>1.604551</td> <td>155.297561</td> <td>201</td> <td>462</td> <td>regulation of immune system process</td> </tr>
<tr> <td>#8</td> <td>GO:0070482</td> <td>2.506913e-06</td> <td>1.756590</td> <td>99.834146</td> <td>157</td> <td>378</td> <td>response to oxygen levels</td> </tr>
<tr> <td>#9</td> <td>GO:0001775</td> <td>3.132966e-06</td> <td>1.652226</td> <td>127.061641</td> <td>168</td> <td>378</td> <td>cell activation</td> </tr>
<tr> <td>#10</td> <td>GO:0048513</td> <td>5.364295e-06</td> <td>1.344049</td> <td>492.784035</td> <td>539</td> <td>1268</td> <td>animal organ development</td> </tr>
<tr> <td>#11</td> <td>GO:0009605</td> <td>5.999994e-06</td> <td>1.380084</td> <td>358.999557</td> <td>419</td> <td>1068</td> <td>response to external stimulus</td> </tr>
<tr> <td>#12</td> <td>GO:0002694</td> <td>6.874545e-06</td> <td>1.916519</td> <td>66.550698</td> <td>96</td> <td>238</td> <td>regulation of leukocyte activation</td> </tr>
<tr> <td>#13</td> <td>GO:0036293</td> <td>7.897443e-06</td> <td>1.741327</td> <td>92.102882</td> <td>126</td> <td>274</td> <td>response to decreased oxygen levels</td> </tr>
<tr> <td>#14</td> <td>GO:0070371</td> <td>8.163009e-06</td> <td>2.056243</td> <td>52.774279</td> <td>79</td> <td>187</td> <td>ERK1 and ERK2 cascade</td> </tr>
<tr> <td>#15</td> <td>GO:0060429</td> <td>8.283770e-06</td> <td>1.544665</td> <td>158.995122</td> <td>202</td> <td>473</td> <td>epithelium development</td> </tr>
</tbody>
</table>
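For reference, the Benjamini-Hochberg adjustment performed by p.adjust(..., method = "BH") is the step-up procedure below; a stdlib-only Python sketch with made-up p-values (not the ones from this analysis):

```python
def benjamini_hochberg(pvals):
    """BH-adjusted p-values, equivalent to R's p.adjust(p, method = "BH")."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of p * n / rank
    for rank in range(n - 1, -1, -1):
        i = order[rank]
        running_min = min(running_min, pvals[i] * n / (rank + 1))
        adjusted[i] = running_min
    return adjusted

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
adj = benjamini_hochberg(pvals)
print([round(a, 4) for a in adj])
# [0.008, 0.032, 0.0672, 0.0672, 0.0672, 0.08, 0.0846, 0.205]
print(sum(a < 0.05 for a in adj))  # 2
```

Note how several raw p-values below 0.05 no longer pass the cutoff after adjustment, which is exactly why the thresholding is applied to the adjusted values.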
Example: comparison with multiple other methods implemented in R

In this example we present a comparison of our method with multiple other biclustering algorithms available in Bioconductor that support the `Biclust` class interface. We start by loading all the required libraries.

```r
library(runibic)
library(QUBIC)
library(isa2)
library(BiBitR)
library(biclust)
data(BicatYeast)
```

Now, we perform biclustering using the aforementioned biclustering methods and extract the Biclust objects:

```r
resCC <- biclust::biclust(BicatYeast, method = BCCC())
resBimax <- biclust::biclust(BicatYeast, method = BCBimax())
resQuest <- biclust::biclust(BicatYeast, method = BCQuest())
resBiBit <- BiBitR::BiBitWorkflow(matrix = binarize(BicatYeast), minr = 50, minc = 5, noise = 0.2)$Biclust
```

Some charts will pop up during the previous analysis, as they are shown by default in the BiBit package workflow. Finally, we inspect the summary of the biclustering results using the `showinfo` command from the `QUBIC` package.

```r
QUBIC::showinfo(BicatYeast, c(resCC, resBimax, resQuest, resBiBit, resPlaid, resXmotifs, resUnibic))
```

Biclustering packages in Bioconductor (3.6)

Bioconductor in version 3.6 provides implementations of the following biclustering algorithms:
- ISA (Bergmann et al., 2003) - implemented in the eisa and isa2 Bioconductor packages (Csardi et al., 2010),
- CC (Cheng and Church, 2000), Plaid (Lazzeroni and Owen, 2002), Bimax (Prelić et al., 2006), xMotifs (Murali and Kasif, 2003), Quest (Kaiser, 2011), Spectral Kluger et al.
(2003) - all available in the biclust package (Kaiser et al., 2015),
- FABIA, FABIAS, FABIAP - available in the Bioconductor package fabia (Hochreiter et al., 2010),
- HapFABIA - implemented in the package hapFabia (Hochreiter, 2013),
- QUBIC (Li et al., 2009) - implemented in the more modern package QUBIC (Zhang et al., 2017) and the older package qubic (Zhang, 2015),
- MCbiclust - available in the Bioconductor package MCbiclust (Bentham, 2017),
- SSVD (Lee et al., 2010) and S4VD (Sill et al., 2011) - available in the Bioconductor package s4vd (Sill and Kaiser, 2015),
- Iterative Binary Biclustering of Gene sets - available in the Bioconductor package iBBiG (Gusenleitner et al., 2012),
- Biclustering Analysis and Results Exploration - available in the package BicARE (Gestraud, 2008),
- Biclustering Algorithm for extracting bit-patterns from binary data-sets - available in the package BiBitR.

Session Info

Below is information about the session under which the previous code was generated.

```r
> sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 16.04.4 LTS

Matrix products: default
BLAS: /usr/lib/libblas/libblas.so.3.6.0
LAPACK: /usr/lib/lapack/liblapack.so.3.6.0

locale:
 [1] LC_CTYPE=en_US.UTF-8    [2] LC_NUMERIC=C            [3] LC_TIME=en_US.UTF-8
 [4] LC_COLLATE=en_US.UTF-8  [5] LC_MONETARY=en_US.UTF-8 [6] LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8    [8] LC_NAME=C               [9] LC_ADDRESS=C
[10] LC_TELEPHONE=C         [12] LC_IDENTIFICATION=C

attached base packages:
[1] parallel  stats4  grid  stats  graphics  grDevices  utils  datasets  methods
[10] base

other attached packages:
 [1] fabia_2.24.0          BiBitR_0.3.1         isa2_0.3.5
 [4] QUBIC_1.6.0           runibic_1.0.2        SummarizedExperiment_1.8.1
 [7] DelayedArray_0.4.1    matrixStats_0.52.2   Biobase_2.38.0
[10] GenomicRanges_1.30.3  GenomeInfoDb_1.14.0  IRanges_2.12.0
[13] S4Vectors_0.16.0      BiocGenerics_0.24.0  biclust_2.0.0
[16] lattice_0.20-35       colorspace_1.3-2     MASS_7.3-47

loaded via a namespace (and not attached):
 [1] mclust_5.4  Rcpp_0.12.17  flexclust_1.3-5
mvtnorm_1.0-7
 [5] tidyverse_0.8.1        class_7.3-14      VGAM_1.1.5               R6_2.2.2
 [9] plyr_1.8.4             ggplot2_2.2.1     pillar_1.2.2             zlibbioc_1.24.0
[13] rlang_0.2.0            lazyeval_0.2.1    dplyr_0.7.5-7            curl_3.0
[17] whisker_0.3-2          kernlab_0.9-25    Matrix_1.2-11            randomcoloR_1.1.0
[21] additivityTests_1.1-4  Rtsne_0.13        stringr_1.2.0            foreign_0.8-69
[25] RCurl_1.95-4.8         munsell_0.4.3     compiler_3.4.4           nnet_7.3-12
[29] tibble_1.4.2           gridExtra_2.3     GenomeInfoDbData_0.99.1  dendextend_1.7.0
[33] viridisLite_0.2.0      bitops_1.0-6      jsonlite_1.5             gtable_0.2.0
[37] magrittr_1.5           scales_0.5.0      stringi_1.2.2            XVector_0.18.0
[41] viridis_0.4.0          flexmix_2.3-14    testthat_1.0.0           robustbase_0.92-8
[45] tools_3.4.4            fpc_2.1-11        glue_1.2.0               trimcluster_0.1-2
[49] Biobase_2.38.0         purrr_0.2.4       cluster_2.0.6            prabclus_2.2-6
[53] modeltools_0.2-21
```

References
A Practical Guide to Running Snort on Red Hat Linux 7.2 and Management Using IDS Policy Manager, MySQL+IIS+ACID From Your Workstation

William Metcalf
GIAC Security Essentials Certification
April 02, 2002
v1.3

Introduction

In the brief time that I have been on this planet the state of computing has changed drastically. The high-powered computers and blink-of-an-eye internet connectivity once reserved for universities and major corporations have now become a staple in small businesses around the United States and the world. As more and more of these small businesses get connected to the global network we call the internet, we must focus our attention on securing their systems. It used to be that hacking systems such as these could only be done by highly skilled programmers and network gurus. This too has changed with the birth of "script kiddies": attackers who use freely available, easy-to-use programs and scripts to attack such systems. Firewalls and virus protection software add layers of security, but in most cases this is not enough. Snort, a free NIDS (*Network Intrusion Detection System*), adds another layer to your security blanket. To give you a better picture of what I mean by this, I will quote Wes Simonds of SearchNetworking: "If a firewall is the initial gate, Snort is the highly-trained Doberman pack that roams the company grounds pawing at intruders, sniffing at their packets in a deceptively unobtrusive manner and occasionally when things are manifestly uncool biting them gently in half." Snort will watch and analyze your network traffic and will alert you when there are possible hacking attempts against your computer system(s).
Snort was originally written by Martin Roesch for *nix operating systems, and according to one study can keep up with heavyweights such as Cisco and ISS (study done by the Gartner Group, [http://www.gartner.com/DisplayTechOverview?id=320015](http://www.gartner.com/DisplayTechOverview?id=320015)). I will show you how to set up Snort on Red Hat 7.2, and how to manage your sensor and view alerts from your Windows 2000 workstation.

Getting Started

There are a few things that we are going to need to get started:

- Access to the internet.
- Access to a CD burner.
- A computer dedicated to Red Hat and Snort.
- A computer used as a management workstation. This can be your desktop. In this example I use Windows 2000, but this configuration should work for any Windows NT variant.
- A hub to monitor. If you are in a switched network you must set up a monitor port for the sensor, to which the traffic you desire to watch can be mirrored. If this is the case you must configure your sensor with two network cards: one to plug into the monitor port and one to communicate with your management workstation.
- Two static IP addresses on your network. Assign one to the sensor and one to the management workstation.

Snort on Red Hat Linux 7.2

Installing Red Hat

1. Burn the Red Hat 7.2 ISO images to CD. The ISO images should be enigma-i386-disc1.iso and enigma-i386-disc2.iso; most CD-writing software supports ISO images. I know that Roxio Easy CD Creator and Ahead Nero Burning ROM both support this format.
2. Configure your BIOS to auto-boot from CD-ROM. If your BIOS does not support auto-booting from CD, consult the Red Hat 7.2 documentation at http://www.redhat.com/docs for creating an installation boot disk. Insert Red Hat disc 1 into your CD-ROM drive and reboot your computer.
3. After your computer POSTs, a text-based Red Hat Linux installation screen will appear. Press the <Enter> key to continue.
4.
On the Language Selection screen, select English as the language you wish to use during the install. Select the Next button to proceed to the Keyboard Configuration screen.
5. Select your keyboard model and layout and select Enable dead keys. Select the Next button to proceed to the Mouse Configuration screen.
6. Select your mouse type and choose whether or not to emulate three buttons. Select the Next button to proceed to the Red Hat welcome screen.
7. Select the Next button to proceed to the Installation Type screen.
8. Select the Custom installation radio button. Select the Next button to proceed to the Disk Partitioning Setup screen.
9. Select the "Have the installer automatically partition for you" radio button. Select the Next button to proceed to the Auto Partitioning screen.
10. Select the "Remove all partitions on this system" radio button. Uncheck the Review Results check box. Select the Next button to proceed to the warning prompt.
11. At the warning prompt click Yes; this will take you to the Boot Loader Configuration screen.
12. On the Boot Loader Configuration screen, accept the defaults and select the Next button to proceed to the Boot Loader Password Configuration screen.
13. Check the "Use a GRUB Password?" checkbox. Enter your password twice, once in the Password box and once in the Confirm box. I suggest using mixed case, numbers, and special characters with a minimum length of eight characters. Select the Next button to proceed to the Network Configuration screen.
14. Configure your network card(s) as suited for your network. I can't really help you here; if you don't know what to input into these boxes, contact somebody on your network team to get these settings. Once you have configured your network card, select the Next button to proceed to the Firewall Configuration screen.
15. Select the No Firewall radio button. If you put a firewall in place, snort will not be able to see the traffic from anything that you block.
Select the Next button to proceed to the Additional Language Support screen.
16. Select any additional languages you may need and select the Next button to proceed to the Time Zone Selection screen.
17. Select your time zone by picking it from the list or by clicking the point on the map that represents your time zone. Select the **Next** button to proceed to the Account Configuration screen.
18. Enter your root account password once into the **Root Password:** box and once into the **Confirm** box. Once again, use mixed-case letters, special characters, and numbers, and make the password longer than eight characters. Select the **Next** button to proceed to the Authentication Configuration screen.
19. Accept the defaults and then select the **Next** button to proceed to the Package Group Selection screen.
20. Uncheck all of the check boxes that are checked by default. Check the **Select individual packages** check box and select the **Next** button to proceed to the Individual Package Selection screen.
21. Select the Flat View radio button. Check the box next to the following items:

- autoconf
- automake
- binutils
- cpp
- freetype
- ftp
- gcc
- gcc-c++
- gcc3
- gcc3-c++
- gd
- glibc-devel
- kernel-headers
- libgcc
- libjpeg
- libpcap
- libpng
- libstdc++-devel
- libstdc++3-devel
- linuxconf
- lynx
- m4
- make
- mysqlclient9
- openssh
- openssh-server
- perl
- wget

Uncheck the checkbox next to **sendmail**; it should be toward the bottom of the list. Select the Next button to proceed to the Unresolved Dependencies screen. Accept the default setting on this page and select the Next button to proceed to the About to Install screen.
22. Accept the default settings on this page and select the Next button to proceed to the Installing Packages screen.
23. Red Hat will automatically begin installing; you can probably go smoke a cigarette or grab a cup of coffee and come back.
Great, you've returned, and the Red Hat installation should prompt you for disc 2. Insert disc 2 and select the OK button. It should start installing packages automatically from disc 2. Once Red Hat is done copying files it will automatically take you to the Boot Disk Creation screen.
24. Whether or not you want to create a boot disk is up to you, but it is always a good idea. Check or uncheck the check box as you see fit and then select the Next button to proceed to the Congratulations screen.
25. Select Exit, and Red Hat will reboot, eject the CD-ROM, and bring up a logon prompt.

Installing Snort 1.8.4

Download the following RPMs, either by using wget (which I will explain in a minute) or by using a different computer, downloading the RPMs, and burning them to CD or copying them to floppies:

http://www.snort.org/dl/binaries/RPMS/snort-1.8.4-1snort.i386.rpm
http://www.snort.org/dl/binaries/RPMS/libnet-1.0.2a-1snort.i386.rpm
http://www.snort.org/dl/binaries/RPMS/snort-mysql+flexresp-1.8.4-1snort.i386.rpm
http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-shared-3.23.49a-1.i386.rpm
http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-devel-3.23.49a-1.i386.rpm
http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-client-3.23.49a-1.i386.rpm

1. Log into the sensor as user root and, when prompted, input the password that you assigned to the root user. You should now have a shell prompt; if you entered your hostname in the Network Configuration screen, your prompt will look like [username@hostname homedir]#. Enter the following commands:

```
mkdir /snort-install
cd /snort-install
```

2. Now we need to get the RPM packages that we downloaded copied to the /snort-install directory.
To use wget on your sensor (which has access to the internet), enter the following commands:

```
wget http://www.snort.org/dl/binaries/RPMS/snort-1.8.4-1snort.i386.rpm
wget http://www.snort.org/dl/binaries/RPMS/libnet-1.0.2a-1snort.i386.rpm
wget http://www.snort.org/dl/binaries/RPMS/snort-mysql+flexresp-1.8.4-1snort.i386.rpm
wget http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-shared-3.23.49a-1.i386.rpm
wget http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-devel-3.23.49a-1.i386.rpm
wget http://mysql.orst.edu/Downloads/MySQL-3.23/MySQL-client-3.23.49a-1.i386.rpm
```

To copy from a floppy drive, format the floppy disks and copy the RPMs to the disks. The mount command is `mount /dev/fd#`, where # is the number of your floppy drive. In Linux almost all numbering starts at zero, so for most systems with only one floppy drive type the following:

```
mount /dev/fd0
cp /mnt/floppy/* /snort-install
umount /dev/fd0
```

Insert your second floppy, and so on, entering the same commands again for each disk.

To copy from a CD-ROM, burn the RPMs to a CD and enter the following commands:

```
mount /dev/cdrom
cp /mnt/cdrom/* /snort-install
umount /dev/cdrom
```

3. Double-check that all of your RPM packages are in the /snort-install directory:

```
cd /snort-install
ls
```

You should see the following:

```
libnet-1.0.2a-1snort.i386.rpm
MySQL-client-3.23.49a-1.i386.rpm
MySQL-devel-3.23.49a-1.i386.rpm
MySQL-shared-3.23.49a-1.i386.rpm
snort-1.8.4-1snort.i386.rpm
snort-mysql+flexresp-1.8.4-1snort.i386.rpm
```

4. To install the packages, type the following commands:

```
rpm -v -i /snort-install/libnet-1.0.2a-1snort.i386.rpm
rpm -v -i /snort-install/MySQL-shared-3.23.49a-1.i386.rpm
rpm -v -i /snort-install/MySQL-client-3.23.49a-1.i386.rpm
rpm -v -i /snort-install/MySQL-devel-3.23.49a-1.i386.rpm
rpm -v -i /snort-install/snort-1.8.4-1snort.i386.rpm
rpm -v -i /snort-install/snort-mysql+flexresp-1.8.4-1snort.i386.rpm
```

5.
Run the following command to configure the snort daemon to start automatically:

```
vi /etc/rc.d/init.d/snortd
```

Press the <Insert> key and find the lines that read:

```
daemon /usr/sbin/snort -A fast -b -l /var/log/snort -d -D \
-I $INTERFACE -c /etc/snort/snort.conf
```

Note: if you are using two network cards, you must configure the $INTERFACE variable to reflect the interface that is plugged into a promiscuous port on your switch.

Change these lines to be:

```
daemon /usr/sbin/snort-mysql+flexresp -D -i $INTERFACE -c /etc/snort/snort.conf
```

Next find the line that reads:

```
killproc snort
```

and change it to:

```
killproc snort-mysql+flexresp
```

Finally, find the line that reads:

```
status snort
```

and change it to:

```
status snort-mysql+flexresp
```

Press the <Esc> key, then type :wq and press <Enter> to save the file and quit vi. This should take you back to your shell prompt. Type the following to test your snort daemon:

```
service snortd start
```

should return:

```
Starting snort: [ OK ]
```

```
service snortd status
```

should return something like:

```
snort-mysql+flexresp (pid 959) is running...
```

```
service snortd stop
```

should return:

```
Stopping snort: [ OK ]
```

Now we are going to set snort to run at startup by typing in the following commands:

```
cd /etc/rc3.d
ln -s /etc/rc.d/init.d/snortd S40snortd
```

Reboot your sensor by typing in the following command:

```
shutdown -r now
```

Look for the following line during the initialization of Linux:

```
Starting snort: [ OK ]
```

You can check any alerts that snort is logging by typing the following command:

```
vi /var/log/snort/alert
```

You should get a screen with your alerts; you can use the up and down arrow keys or the <Page Up> and <Page Down> keys to move through the document. To exit the document, type :q and press <Enter>. This should take you back to your shell prompt. Type the following to log out of the console:

```
exit
```

Voila, you are snorting!!!!
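The hand edits to the snortd init script described above can also be scripted. The following is a sketch only: it assumes GNU sed and the two-line `daemon ... \` invocation shown in the stock Red Hat snortd script, and the `edit_snortd` helper name is my own.

```shell
#!/bin/sh
# Non-interactive version of the vi edits above: point the snortd init
# script at the snort-mysql+flexresp binary. A .bak copy of the original
# script is kept. GNU sed is assumed (for N and \n in the substitution).
edit_snortd() {
    f="$1"
    cp "$f" "$f.bak"
    sed -e '\|daemon /usr/sbin/snort |{N;s|.*\n.*|        daemon /usr/sbin/snort-mysql+flexresp -D -i $INTERFACE -c /etc/snort/snort.conf|;}' \
        -e 's|killproc snort$|killproc snort-mysql+flexresp|' \
        -e 's|status snort$|status snort-mysql+flexresp|' \
        "$f.bak" > "$f"
}

# On the sensor you would run: edit_snortd /etc/rc.d/init.d/snortd
```

The first expression matches the `daemon /usr/sbin/snort ` line, pulls in its continuation line with N, and replaces both with the single-line invocation; the other two expressions are the killproc/status renames.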
Now we will tune your sensor to watch your network, get rid of pesky port-scan messages, and get it logging to a MySQL database.

The Management Workstation

Create a directory on your local hard drive: c:\snortM. Download the following items to that folder; for the rest of the paper we are going to assume that you downloaded everything into c:\snortM.

- WinZip
- PuTTY: http://the.earth.li/~sgtatham/putty/latest/x86/putty.exe
- IDS Policy Manager: http://www.activeworx.com/downloads/IDSPolMan-1.2.full.zip
- MySQL Server: http://mysql.orst.edu/Downloads/MySQL-3.23/mysql-3.23.49-win.zip
- PHPlot: http://www.silicondefense.com/software/snort-win32/binaries/phplot-4.4.6.zip
- ACID: http://www.silicondefense.com/software/snort-win32/binaries/acid-0.9.6b20.zip
- ADODB: http://www.silicondefense.com/software/snort-win32/binaries/adodb172.zip
- Snort 1.8.4 tar: http://www.snort.org/dl/snort-1.8.4.tar.gz

Install WinZip

Install WinZip by running c:\snortM\winzip81.exe, accepting all of the defaults.

Install Activeworx IDS Policy Manager

Extract all of the files in c:\snortM\IDSPolMan-1.2.full.zip by double-clicking on the zip file and extracting all files to a directory, let's say c:\snortM\IDSpol. Navigate to c:\snortM\IDSpol and run IDSPMFULL1.2 to set up your IDS Policy Manager.

1. On the first screen, select the Next button.
2. On the next screen, press the Browse button to change the default path that IDS Policy Manager installs in. The space in the path name (i.e. "c:\program files") causes problems with secure copy, which IDS Policy Manager uses to upload files to your sensor. Choose something like C:\Activeworx.
3. When you hit the Browse button, change your path and hit the OK button. When you return to the Choose Destination Location screen, press the Next button.
4. Accept the defaults on the next screen, so just hit the Next button.
5. Once again, just hit the Next button on the following screen.
Setup will install the VB6 runtimes and IDS Policy Manager and exit quietly.

6. Navigate to the directory where you installed IDS Policy Manager and create a directory for each one of your sensors, i.e. make directories C:\Activeworx\Sensor1, C:\Activeworx\Sensor2. Go into the Official folder in the directory where you installed IDS Policy Manager, select all of its contents, and copy them into your sensor directories.
7. Now go to Start → Programs → Activeworx → IDS Policy Manager. When asked about checking for updates, I said Yes; it will take you to the website when new versions are available. Select the Policy Manager tab, then select Policy → Add Policy.
8. Enter the policy name for your first sensor. In this example the IDS/System Version is Snort 1.8.4, so leave the default setting there. In the Policy Location field enter C:\Activeworx\Sensor1\snort.conf; remember, this is the folder that we created to hold our first sensor's configuration files. Press the OK button and you should have a new entry in your window called Sensor1. Repeat the above steps for any other sensors you may have, changing the Policy Name, Policy Location, and Description.
9. Now go back to the Sensor Manager screen by selecting the Sensor Manager tab. From the menu bar select Sensor → Add Sensor. In the box that opens, enter the information about your sensor as follows:

- Sensor Name: for this example, Sensor1
- IP Address of Sensor: in this example 192.168.1.222 (this will change depending on your network)
- IDS System: for this example, Snort 1.8.4 (default)
- Policy: Sensor1, the policy that we just created
- Upload Protocol: scp (default)
- Port: 22 (default); this is the secure copy part of the SSH daemon package on our sensor
- Username: root
- Password: the secure password that you entered when you installed Red Hat on the sensor
- Password (Confirm): the same password again
- Upload Directory: /etc/snort (remember, this is where the sensor reads its snort configuration file from)

10. When you press the OK button, you will be prompted with a box telling you that you need to set up secure copy. Press the OK button to proceed. When asked whether or not you want to store the remote host key in your cache, press the <y> key and then the <Enter> key. If all is well you should get a confirmation screen; press the OK button, and this should take you back to the Sensor Manager screen. We will come back and work on the settings in a minute; next we are going to set up a MySQL database for snort to log to.

Install MySQL

Extract all of the files in c:\snortM\mysql-3.23.49-win.zip by double-clicking on the zip file and extracting all files to a directory, let's say c:\snortM\sqlinst. Navigate to c:\snortM\sqlinst and run Setup.exe to set up your MySQL database.

1. On the first screen, select the Next button to proceed to an information screen.
2. This information screen tells you what you need to get MySQL to run if you are not going to install it in c:\mysql. Press the Next button to proceed to the Choose Destination Location screen.
3. Accept the default installation directory by selecting the Next button.
4. Select the default "Typical" installation by selecting the Next button. Setup will proceed to install MySQL.
5. Once completed, press the Finish button to exit setup.
6. Navigate to c:\mysql\bin\winmysqladmin.exe, right-click on it, and use Send To → Desktop to create a shortcut. Go to your desktop and double-click on the shortcut to winmysqladmin.exe. Enter the username "root" and the password that you want, and then press the OK button.

****Most of the information below on configuring IIS, MySQL, and ACID has been taken from papers written by Michael Steele of Silicon Defense, only slightly modified for this paper: [http://www.silicondefense.com/techsupport/windows.htm](http://www.silicondefense.com/techsupport/windows.htm)****

7. Right-click on the stoplight in the taskbar on the lower right side of your screen and select Show Me from the menu.
8. Select the Databases tab to bring up the databases screen.
9. Select the line that has your computer name on it, right-click, and select Create New Database. Name your database snort and select Create the Database. If all goes well you should see a confirmation box; select the OK button to proceed, then press the Cancel button to get out of the Adding Database box.
10. Extract the create_mysql script from snort-1.8.4.tar using WinZip into c:\snortM\snortdb. Uncheck the "use folder names" checkbox if it is checked.
11.
Open a shell window by going to Start → Run → cmd.exe. Enter the following commands in your shell window:

```
cd c:\mysql\bin
mysql
grant INSERT,SELECT,CREATE,DELETE on snort.* to snort;
grant INSERT,SELECT,CREATE,DELETE on snort.* to sensor1@192.168.1.222;
```

(Change the above to match your network environment, where sensor1 is your sensor name and 192.168.1.222 is the address of your sensor. You will have to create a user for every sensor you create.)

```
exit
mysql -u snort snort < c:\snortM\snortdb\create_mysql
exit
```

(The second exit closes the shell window.)

### Install IIS

Insert your Windows 2000 CD into your CD-ROM drive.

1. Go to Start → Settings → Control Panel → Add/Remove Programs.
2. Select the Add/Remove Windows Components button.
3. Highlight Internet Information Services and select the Details button.
4. Select the checkbox for FrontPage 2000 Server Extensions (this will install all other needed components).
5. Select the OK button.
6. Select Next. The system will begin copying files from the CD.
7. Once finished, select the Finish button to exit the installation.
8. Patch IIS. The Microsoft Windows Update web page supports updates for IIS; you can access it by opening an Internet Explorer window and selecting Tools → Windows Update from the menu bar, or by typing http://windowsupdate.microsoft.com into your address bar. Install all patches under Critical Updates; this may take multiple reboots.

**Install PHP**

Navigate to the directory c:\snortM\phpinst\ and run php-4.2.0-installer.exe.

1. On the first screen, select the Next button to proceed to the License Agreement screen.
2. The installer displays the PHP License, version 2.02. Select the I Agree button to proceed to the Installation Type screen.
3. Select the Advanced radio button, then the Next button, to proceed to the Choose Destination Location screen.
4. Accept the defaults and press the Next button to proceed to the Backup Replaced Files screen.
5. Accept the defaults and select the Next button to proceed to the Choose Upload Temporary Directory screen.
6. Select the Next button to proceed to the Mail Configuration screen.
7. Press the Next button to proceed to the Choose Session Save Directory screen.
8. Select the Next button to proceed to the Error Reporting Level screen.
9. Select the Next button to proceed to the Server Type screen.
10. Select the "Microsoft IIS 4 or higher" radio button and select the Next button to proceed to the File Extensions screen.
11. Select all three check boxes and select the Next button to proceed to the Start Installation screen.
12. Select the Next button to start installation. When setup prompts about an existing php.ini file, select the Yes button. Select the OK button to proceed to IIS Script Node Selection, select the WWW Service Master Properties checkbox, and select the OK button to proceed to the Installation Complete screen.
13. Press the OK button to exit setup.

Install ACID and Configure Its Components

Navigate to c:\snortM and extract the following, with the "use folder names" option checked in WinZip:
- adodb172.zip to c:\snortM\ADODB
- acid-0.9.6b20.zip to c:\inetpub\wwwroot (the location of your root web folder)
- phplot-4.4.6.zip to c:\snortM\phplot

1. Navigate to c:\snortM\ADODB and open adodb.inc.php in WordPad. Change the line:

$ADODB_Database = "";

to read:

$ADODB_Database = 'C:\snortM\adodb';

2. Navigate to c:\inetpub\wwwroot\acid and open acid_conf.php with WordPad. Change the line that reads:

$DBlib_path = "";

to read:

$DBlib_path = "C:\snortM\adodb";

Change the lines that read:

$alert_dbname = "snort_log";
$alert_host = "localhost";
$alert_port = "";
$alert_user = "root";
$alert_password = "mypassword";
/* Archive DB connection parameters */
$archive_dbname = "snort_archive";
$archive_host = "localhost";
$archive_port = "";
$archive_user = "root";
$archive_password = "mypassword";

to read:

$alert_dbname = "snort";
$alert_host = "localhost";
$alert_port = "";
$alert_user = "snort";
$alert_password = "";
/* Archive DB connection parameters */
$archive_dbname = "snort";
$archive_host = "localhost";
$archive_port = "";
$archive_user = "snort";
$archive_password = "";

Then change:

$ChartLib_path = "";

to read:

$ChartLib_path = "c:\snortM\phplot";

Reboot your management workstation now.

**Configuring Your Sensor**

Once your management workstation has rebooted, open IDS Policy Manager by going to Start → Programs → IDS Policy Manager.

1. Select the Policy Manager tab, highlight the Sensor1 entry in the Policy Manager screen, and press <Ctrl>+<O> to bring up the Policy Editor. Select the Settings tab to pull up the settings screen.
2. On the Settings screen, under the Variables tab, we are going to change the variables to reflect our network. $HOME_NET is the network that we wish to monitor. I usually choose the network that I'm on; for example, if we are on the 192.168.1. network with a netmask of 255.255.255.0, then we are using a 24-bit mask, so our HOME_NET entry would be 192.168.1.0/24.
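When IDS Policy Manager uploads the policy, these variable settings end up as plain `var` lines in the sensor's /etc/snort/snort.conf. The fragment below is illustrative only: the /24 HOME_NET and /32 server entries follow the masks discussed in this section, but the individual server addresses are made-up examples; substitute your own.

```
var HOME_NET 192.168.1.0/24
var HTTP_SERVERS [192.168.1.10/32,192.168.1.11/32]
var SQL_SERVERS 192.168.1.108/32
var DNS_SERVERS 192.168.1.1/32
var RULE_PATH /etc/snort
```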
Once completed, uncheck the default entry HOME_NET = any.

3. Next, input your HTTP servers. I have two of them. Since these computers have a mask of 255.255.255.255, they have a 32-bit mask, hence the /32. Using this logic, do the same for the SQL_SERVERS and DNS_SERVERS variables.
4. Edit the RULE_PATH variable to be /etc/snort; remember, this is the directory on the sensor where the snort daemon is pulling its configuration from.
5. On the left-hand side of the screen, select the Logging button and check the Database checkbox. Enter the name of your sensor in the Sensor Name field. Enter the name of the MySQL database we created (in this case, snort) into the DB Name field. Select the mysql entry from the DB Type drop-down menu. Select the log entry from the Log Rule Type drop-down menu. Enter the user that we created for this sensor into the User field. MySQL appends @<the address you connect from> to your username, so we just need to put in sensor1; MySQL will see it as sensor1@192.168.1.222. In this case we have not yet set a password on `sensor1@192.168.1.222`, so leave the User Pass: field blank. The DB Host is the address of our management workstation, in my case 192.168.1.108; this will change based on your environment. In the DB Port field enter 3306, the default port that MySQL listens on. Once finished, select File → Save and Exit.
6. This will take you back to the Policy Manager window. Select the Sensor Manager tab, highlight your sensor (in my case Sensor1), and press the <Ctrl>+<P> keys. This will upload your policy to your sensor.
7. Now we need to test our configuration. Navigate to `c:\snortM` and run putty.exe. Enter the IP address of your sensor, select the SSH radio button, and select the Open button.
8. When we set up SCP in IDS Policy Manager it should have cached the server's host key, as SCP is part of SSH.
If for whatever reason the key is not in your local cache, you will receive a prompt; select the Yes button.

9. At the login prompt, log in as root with your secure password. Now we are going to test our snort configuration by entering the following command:

```
snort-mysql+flexresp -v -c /etc/snort/snort.conf
```

The -v means verbose mode, and -c is the switch that tells the snort executable which configuration file to use.

10. Snort will give you a short summary of the options that you have enabled, and then it should start showing you the traffic that is passing over its interface. Press the <Ctrl>+<C> keys to kill snort. If you don't get a screen with traffic passing over it, double-check the configuration settings in IDS Policy Manager.
11. Great, snort is working with the configuration file that we have produced. Now restart the snortd service by typing the following into your PuTTY session:

```
service snortd stop
service snortd start
```

**Viewing Alerts with ACID**

Open an Internet Explorer browser window and type the following into the address bar: [http://127.0.0.1/acid/index.html](http://127.0.0.1/acid/index.html). You should get a screen where ACID tells you that it has an error.

1. Select the "go to Setup Page" link.
2. Select Create ACID AG.
3. When this is complete, return to the address above and you should get the ACID main screen. Select the number next to the "Total Number of Alerts" line. You should now see your alerts; if nothing is there, don't worry, it will probably take some time to see your first alert.

Congratulations, you have just made yourself a snort sensor.

Conclusion

Snort provides small and enterprise environments alike a robust and reliable intrusion detection system. While I gave you an example of the basic setup of snort, there is still a great deal of tuning that needs to be done at this point.
More than likely you will need to use IDS Policy Manager to enable or disable certain signatures to suit your environment. For more information on rules, consult the Snort Users Manual included in snort-1.8.4.tar.gz or on the snort website at http://www.snort.org/docs/writing_rules/.

For all of you who work in a pure win32 environment, have no fear: Silicon Defense maintains a binary of snort ported to win32 operating systems, and they also have extensive documentation on the setup of snort on win32: http://www.silicondefense.com

The sensor that we created is relatively secure due to the fact that we stripped down the normally bloated install of Red Hat. The packages that we installed still need to be kept updated due to potential exploits in them. There is documentation on how to do this in the Center For Internet Security Linux Benchmark v1.0.0, which can be obtained from http://www.cisecurity.org.

For future updates of snort, I have included all that is needed to compile the latest version of snort from source. At the time I completed this writing, they had released a binary-only version of snort 1.8.6; out of the box, that binary does not support MySQL. To upgrade to the most recent version of snort, you would do the following on your snort sensor with internet access:

```
cd /snort-install
wget http://www.snort.org/dl/snort-1.8.6.tar.gz
tar -xzvf snort-1.8.6.tar.gz
cd /snort-install/snort-1.8.6
./configure --with-mysql
make
make install
```

This will place the snort binary into /usr/local/bin; change your snortd script accordingly. Also, depending on what rules you have enabled, you may need to update to the new rule set. If you download snortrules.tar.gz from http://www.snort.org/dl/signatures/snortrules.tar.gz, you can extract these files into a new directory in your Activeworx folder and create a new policy by pointing IDS Policy Manager at the updated snort.conf.
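The upgrade steps above can be collected into a small helper. This is a sketch only: the `upgrade_cmds` function name is my own, the /snort-install layout follows this paper's setup, and the version/URL follow the 1.8.6 example. It prints the command sequence so it can be reviewed first, then piped into sh to execute.

```shell
#!/bin/sh
# Sketch: print the source-upgrade command sequence described above for a
# given snort version. Review the output, then pipe it into sh to run it.
upgrade_cmds() {
    version="$1"
    echo "cd /snort-install"
    echo "wget http://www.snort.org/dl/snort-$version.tar.gz"
    echo "tar -xzvf snort-$version.tar.gz"
    # note: cd into the unpacked source directory, not the tarball itself
    echo "cd /snort-install/snort-$version"
    echo "./configure --with-mysql"
    echo "make"
    echo "make install"
}

# Review the plan:  upgrade_cmds 1.8.6
# Execute it:       upgrade_cmds 1.8.6 | sh
```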
I hope that this paper was both helpful and informative and provides the reader with enough information to build an IDS sensor on the cheap.

**Technical Reviewers**

Dave Snell, MCSE, CNE, Enterprise Network Administrator, City of Kansas City, Missouri
Rochelle Richeson, Systems Analyst, City of Kansas City, Missouri
David Evans, Enterprise Network Administrator, City of Kansas City, Missouri

**Resources**

Snort Users Manual. Available from: http://www.snort.org/docs/writing_rules/
ActiveWorx FAQ. Available from: http://www.activeworx.com/idspm/faq.htm
Steele, Michael. "Snort 1.8.6b105 RELEASE running IIS / MySQL / ACID." Available from: http://www.silicondefense.com/techsupport/winsnortacid-iis_1.8.6.htm
MySQL Manual.
Simonds, Wes. "Bad Packets: Snort -- the Dobermans behind the firewall." Available from: http://searchnetworking.techtarget.com/originalContent/0,289142,sid7_gci804882,00.html
The Gartner Group. "Intrusion Detection Systems (IDSs): Perspective."
Wreski, Dave & Pallack, Christopher. "Network Intrusion Detection Using Snort."
PHP: Manual: FAQ.
The PuTTY User Manual. Available from: http://the.earth.li/~sgtatham/putty/0.52/htmldoc/
Center For Internet Security Linux Benchmark v1.0.0. Available from: http://www.cisecurity.org
olmocr_science_pdfs
2024-12-09
2024-12-09
a39f63dd3c2c7e12c6d9c2c800817d6b89d98057
1. Order Inversion Design Pattern

• In contrast to traditional parallel programming models, MapReduce does not provide explicit synchronization primitives to the programmer.
  – Examples of synchronization primitives are locks, barriers, and (synchronous) message sending and receiving. (These will be discussed more in a future module.)
• Explicitly managing synchronization in a distributed system is generally considered complex and challenging: whenever multiple processes interact in modifying data concurrently, careful coordination is required to (1) guarantee correctness under all circumstances and (2) achieve good performance. Freeing the programmer from worrying about these low-level details is one of the strengths of MapReduce, and a reason for its popularity.
• However, in the hands of an experienced expert, explicit synchronization can be a powerful tool for creating efficient parallel programs.
• Let’s see how we can perform some form of synchronization in MapReduce. Doing so requires a clever use of Map, Reduce, the Partitioner, and key ordering.

1.1 Example

• The need for explicit synchronization will be illustrated using an example from statistical data analysis and data mining: consider a crowd-sourcing protocol where citizen scientists report bird species they observe. For the sake of simplicity, assume participants report a (species, color) record every time they see a bird.
• Given a large collection of such records, our goal is to estimate, for each species and color, the probability of that color for the species. More formally, the goal is to estimate the conditional probability of a color, given a species.
  – For example, mathematically the probability of the Northern Cardinal (N.C.) being red is defined as \( P(\text{color} = \text{red} \mid \text{species} = \text{N.C.}) \).
• We can estimate such probabilities from big data by counting the appropriate quantities.
In particular, to estimate \( P(\text{color} = C \mid \text{species} = S) \), we need the following:
  – Frequency count \( f(S) \), i.e., the number of records that are observations for species \( S \). In statistical terms, this is called a marginal.
  – Frequency count \( f(S, C) \), i.e., the number of records that are observations matching both species \( S \) and color \( C \). In statistical terms, this is called a joint event.
  – In the example, we estimate \( P(\text{color} = \text{red} \mid \text{species} = \text{N.C.}) \) by dividing the number of red Northern Cardinal observations by the total number of Northern Cardinal observations (including all colors!), i.e., as \( \frac{f(\text{N.C., red})}{f(\text{N.C.})} \).
• This analysis is an example of estimating relative frequencies, a common data mining task. Another problem with the same structure is computing the normalized word co-occurrence vector for each word in a document collection. Intuitively, it measures, for each word, which other words frequently occur near it.

1.2 Obvious Solution Using “Stripes”

- Notice that both frequency counts, \( f(S) \) and \( f(S, C) \), count per species. Hence the species presents itself as an obvious choice for the intermediate key.
- Starting with this observation about the key, the entire MapReduce program just “falls into place.” For each input record (species \( S \), color \( C \)), Map simply emits the record with the species \( S \) as the key and the color \( C \) as the value.
- To enable a Combiner or in-mapper combining, Map could instead output \( (C, 1) \) as the value. The combining approach could then aggregate these counts.
- The Reduce call for species \( S \) counts the number of occurrences of each color to get \( f(S, C) \) for each \( C \). At the same time, it can also keep track of the marginal \( f(S) \) for the species.
```java
// Note: If no combining approach is used,
// Map could simply emit (S, C), i.e.,
// S as the key and C as the value.
map( ..., observation: (species S, color C) )
    emit( S, (C, 1) )
```

```java
reduce( S, [(C1, n1), (C2, n2), ...] ) {
    // H maps a color to a count
    init hashMap H
    marginal = 0
    for all (C, n) in input list do {
        H[C] += n
        marginal += n
    }
    for all C in H do
        emit( (S, C), H[C] / marginal )
}
```

1.2.1 Discussion of the Stripe-Based Approach

• Why do we call this a “stripe”-based approach? Think of a table where each row is indexed by a species and each column is indexed by a color. A cell in this table, indexed by the combination of a species S and color C, contains the count f(S, C). By choosing the species as the key, Reduce works with an entire row of this table, which pictorially looks like a stripe. (We will discuss this and other partitioning options of a multidimensional space in more detail in a future module.)
• The Stripe, i.e., table row, turns out to be a great fit for relative frequency computation. Each cell in the Stripe has the color frequency for the species, and the sum of these frequencies equals the total for the species. More precisely, all the data needed for computing f(S, C)/f(S) for species S is in the corresponding stripe. And the stripe contains no additional irrelevant data.
• So, is there a drawback to this approach? There are indeed two major limitations:
  – First, what if data structure H in Reduce exceeds the size of the available memory? This would not happen for the color example, but it might for other problems where the table has many more columns.
  – Second, the granularity of the Reducer workload is limited by the number of different species. What if we have more machines than species? How can we keep all of them busy?

1.3 New Attempt Using “Pairs”

• We could address the problems of the Stripe-based approach by splitting the Reducer work into smaller units.
Unfortunately, any smaller unit would miss some of the joint events needed for computing $f(S)$, the marginal for the species.
• To see how smaller units of work could become problematic, let’s consider a program that uses both species and color as the intermediate key.
• This program is virtually identical to Word Count. For input record (species $S$, color $C$), Map emits $((S, C), 1)$.
  – Combiners and in-mapper combining can be applied here as well.
• For key $(S, C)$, Reduce computes $f(S, C)$, the frequency count of that species-color combination. Like Word Count, it uses virtually no memory, and the fine granularity enables up to $\#\text{species} \times \#\text{colors}$ different Reduce tasks.
• So, does this approach have any drawbacks?
  – Yes! It is not clear how to compute the marginal $f(S)$. A Reduce call that computes $f(S)$ would have to access $f(S, \text{color})$ for all colors of species $S$.
  – This could be addressed by running another simple MapReduce program to pre-compute $f(S)$. However, this approach seems wasteful for Big Data, as it reads the input data set twice: once for computing $f(S)$ and then again for computing the $f(S, C)$.
• Can we get the best of both worlds, i.e., the one-pass efficiency of Stripes and the small memory footprint and finer work partitioning of Pairs?

1.3.1 Fixing the Pairs-Based Approach, Attempt 1

- Let’s try to fix the Pairs-based approach so that it can do all work in a single MapReduce job.
- Notice that if for some species S the keys (S, C1) and (S, C2) are assigned to different Reducers, then no Reducer has access to all data needed for computing f(S). Hence we have to make sure that all keys (S, *) for species S end up in the same Reduce task.
- We already know how to achieve this by defining a custom partitioning function that assigns a key (S, C) to a Reducer based solely on the S field, ignoring the value of the C field of the key.
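Such a partitioning function can be pictured as follows. This is a hedged sketch only, written as a plain static method rather than an actual Hadoop `Partitioner` subclass, and the class and method names are invented for illustration. The essential point is that the hash ignores the color component of the composite key entirely:

```java
// Hypothetical sketch of a species-only partitioner for composite
// (species, color) keys. Because the color is never consulted, all
// keys (S, *) for a given species S map to the same reduce task.
public class SpeciesPartitioner {
    public static int partition(String species, String color, int numReduceTasks) {
        // Mask the sign bit so the result is non-negative, then take the
        // modulus; only the species contributes to the hash.
        return (species.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}
```

Note the design choice: a key (S, C1) and a key (S, C2) necessarily receive the same task number, which is exactly the guarantee the bullet above asks for.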
- While the custom Partitioner guarantees that all records for species S will be processed in the same Reduce task, there still is a separate Reduce call for each species-color combination.
- [Challenge question 1: How can we compute f(S) when each Reduce call only works with a single color for species S?]
- [Challenge question 2: Does either of these approaches really give us a better solution than the Stripes approach?]

Challenge Question Answers

• Q1, possible answer 1: The individual Reduce calls for keys (S, C1), (S, C2), (S, C3), ... could be turned into a single Reduce call for species S by using a grouping comparator. (Review the secondary sort design pattern.)
• Q1, possible answer 2: State can be maintained across the different Reduce calls for keys (S, C1), (S, C2), (S, C3), ... to keep track of f(S, C) for all colors C of species S. More precisely, both the marginal and a hashmap H that stores the frequency for each color could be defined at the Reducer class level, letting each Reduce call for (S, C) update them accordingly. (Review the in-mapper combining design pattern.)
• Q2: No. Unfortunately, these “improved” Pairs-based approaches have the same drawbacks as the Stripes-based approach. By using a custom Partitioner that only considers the species, Reduce task granularity is back at the species level, as for Stripes. Similarly, since f(S) is computed concurrently with the f(S, C) frequencies, all separate color frequencies for species S have to be kept until the last record for species S is processed. Hence the memory footprint is not smaller than for Stripes either. Essentially, the attempt at improving the Pairs-based approach made it just “simulate” the Stripe-based approach.

1.3.2 Fixing the Pairs-Based Approach, Attempt 2

• The limitation of all attempts so far has been that they tried to compute \( f(S) \) for species \( S \) concurrently with the different \( f(S, C) \) for that same species.
This forced us to (1) send all Map output records for species \( S \) to the same Reducer and (2) keep the counts for all \( f(S, C) \) around until the very last record for species \( S \) was processed by the Reducer, i.e., the moment \( f(S) \) would finally be known.
• If the program had known \( f(S) \) from the beginning, then the Pairs-based approach would be trivial, as shown by the code below. Unfortunately, in reality \( f(S) \) is not magically known and needs to be computed.
• It turns out that this can be done without a separate MapReduce pre-processing step.

```java
map( ..., observation: (species S, color C) )
    emit( (S, C), 1 )

reduce( (S, C), [n1, n2, ...] ) {
    frequency = 0
    for all n in input list do
        frequency += n
    // Here we assume that f(S) is
    // "magically" known.
    emit( (S, C), frequency / f(S) )
}
```

1.4 Solution Using Order Inversion

- Consider the MapReduce program below. Like the Pairs-based approach, it uses both the species and the color for the intermediate key. In addition to the usual \(((S, C), 1)\), Map also emits another record \(((S, \text{dummy}), 1)\). The former will be used to compute \(f(S, C)\), while the latter will be used to compute \(f(S)\). By de-coupling these computations, MapReduce’s property of executing Reduce calls in key order can now be exploited to control the order in which the different frequencies are computed. In particular, by defining a key comparator that sorts color “dummy” before each real color \(C\), it is guaranteed that \(\text{reduce}((S, \text{dummy}), \ldots)\) will be executed before \(\text{reduce}((S, C), \ldots)\) for any other color \(C\) in the same Reduce task.
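The effect of reducing the dummy color first can be simulated on a single machine. The sketch below is illustrative only: the class name, the string-encoded `"S|color"` keys, and the `"!dummy"` token are inventions of this note, and a `TreeMap` stands in for MapReduce's key-ordered Reduce input (the leading `'!'` sorts before alphanumeric color names, mimicking the custom key comparator; it assumes species and color names are plain alphanumeric strings).

```java
import java.util.*;

// Hedged single-machine simulation of order inversion: keys arrive at
// "Reduce" in sorted order, with the dummy color first per species, so a
// single task-level variable (marginal) suffices.
public class OrderInversion {
    static final String DUMMY = "!dummy"; // '!' sorts before letters and digits

    public static Map<String, Double> run(List<String[]> observations) {
        // "Map phase": emit ((S, dummy), 1) and ((S, C), 1); the TreeMap
        // plays the role of MapReduce's sort on the intermediate keys.
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String[] obs : observations) {
            counts.merge(obs[0] + "|" + DUMMY, 1, Integer::sum);
            counts.merge(obs[0] + "|" + obs[1], 1, Integer::sum);
        }
        // "Reduce phase": keys are visited in sorted order, dummy first.
        Map<String, Double> out = new LinkedHashMap<>();
        int marginal = 0; // task-level state carried across Reduce calls
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            String color = e.getKey().split("\\|", 2)[1];
            if (color.equals(DUMMY))
                marginal = e.getValue();                       // f(S)
            else
                out.put(e.getKey(), e.getValue() / (double) marginal); // f(S,C)/f(S)
        }
        return out;
    }
}
```

For three N.C. observations (two red, one brown), the dummy key accumulates the marginal 3 before either real color is reduced, yielding relative frequencies 2/3 and 1/3.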
- A custom Partitioner that partitions based only on the species component guarantees that all joint events for a species-color combination are processed in the same task as the marginal for that species.
- The Reducer class needs a task-level variable, called marginal in the algorithm, to keep track of \(f(S)\). Since the Reduce call for the dummy color happens first, it is guaranteed that \(f(S)\) will be known before the following Reduce calls for \(f(S, C1), f(S, C2), \ldots\). Since the key comparator ensures sorting by species first, it is also guaranteed that the right marginal \(f(S)\) is available even if multiple species are assigned to the same Reduce task.
- Notice that the Reduce task now has a small constant (i.e., independent of the number of species and colors) memory footprint. It only needs the marginal for a single species and a single counter for the currently processed species-color combination.

Code for Order Inversion

```java
map( ..., observation: (species S, color C) ) {
    emit( (S, dummy), 1 )
    emit( (S, C), 1 )
}
```

Partitioner: partition by species

Key comparator for (species, color):
- Sort by species first
- Make sure color “dummy” comes before all real colors

```java
Class Reducer {
    marginal

    reduce( (S, C), [n1, n2, ...] ) {
        if C = dummy {
            // Compute marginal f(S)
            marginal = 0
            for all n in input list do
                marginal += n
        } else {
            // Real color, hence compute f(S, C)
            colorCnt = 0
            for all n in input list do
                colorCnt += n
            emit( (S, C), colorCnt / marginal )
        }
    }
}
```

1.4.1 Discussion

• How does this new solution compare to the previous attempts?
  – Good: It reads the big input data only once, like the Stripes-based approach.
  – Good: It requires virtually no memory, like the simple initial Pairs-based approach.
  – Potentially bad: Map duplicates every input record, therefore in the worst case two copies of the big input data set are transferred from Mappers to Reducers.
In practice, use of a Combiner or in-mapper combining can often dramatically reduce this data transfer.
  – Bad: Like the Stripes-based approach and the “fixed” single-phase Pairs-based approaches, Reduce granularity is potentially very coarse, because it is determined only by the species.
• Interestingly, this approach supports a finer Reduce task granularity at the cost of greater data replication in the Map phase. The code below illustrates this idea. By producing \( k \) replicas in Map, each for a different dummy color, the keys \((S, C_1), (S, C_2), (S, C_3), \ldots\) can be distributed over \( k \) different Reduce tasks. The Partitioner simply has to ensure that each of these tasks receives one of the \((S, dummy_i)\) keys as well, so that it can compute the marginal.
  – For Big Data problems, this \( k \)-fold data duplication can result in poor performance unless combining is very effective.

Code for K-fold Duplication Approach

```java
map( ..., observation: (species S, color C) ) {
    emit( (S, dummy_1), 1 )
    emit( (S, dummy_2), 1 )
    ...
    emit( (S, dummy_k), 1 )
    emit( (S, C), 1 )
}

Class Reducer {
    marginal

    reduce( (S, C), [n1, n2, ...] ) {
        if C is one of the dummy_i colors {
            // Compute marginal f(S)
            marginal = 0
            for all n in input list do
                marginal += n
        } else {
            // Real color, hence compute f(S, C)
            colorCnt = 0
            for all n in input list do
                colorCnt += n
            emit( (S, C), colorCnt / marginal )
        }
    }
}
```

Partitioner: partition by species and color, distributing the colors for species S over k Reduce tasks and making sure each of these Reduce tasks receives exactly one of the dummy_i colors for that species.

Key comparator for (species, color):
- Sort by species first
- Make sure the dummy colors come before all real colors

1.5 Order Inversion Design Pattern Summary

• This design pattern for controlling the order in which intermediate results are computed is called “order inversion.” [Challenge question: In what sense does it invert order?]
– We can compute $f(S)$ as the sum of the $f(S, C)$, summing over all colors $C$. Hence the “natural” order of computation would be to obtain all the $f(S, C)$ for species $S$ first and then add them up to obtain $f(S)$. The order-inversion design pattern turns this around by first computing $f(S)$ and then the individual $f(S, C)$.
• Without order inversion, the programmer would have to rely on either (1) larger and more complex data structures to bring the right data together (e.g., the hashmap structure $H$ for a Stripe) or (2) more MapReduce phases to compute the intermediate results in the appropriate phase.
• Intuitively, order inversion turns a synchronization problem into an ordering problem. More precisely, MapReduce has no explicit synchronization primitives. Hence order inversion relies on the key sort order to enforce computation order. A custom Partitioner assigns the appropriate data to each Reduce task, and the Reducer maintains small task-level state across Reduce invocations.
• Advantages:
  – This approach enables the use of simpler data structures and less Reducer memory, without requiring additional MapReduce phases.
  – It also enables controlling the granularity of Reduce tasks, at the cost of greater data replication.
• Disadvantages:
  – Due to data replication in the Mappers, performance can suffer unless an effective Combiner or in-mapper combining significantly reduces data transfer from Mappers to Reducers.

2. Utilities

- There are certain tasks that are almost always needed when analyzing data:
  - Sorting (discussed in a previous module)
  - Data partitioning
  - Grouping and aggregation
  - Sampling
- This section introduces efficient MapReduce programs for these tasks.

2.1 Per-Record Computation

- Per-record computation is a trivial problem for MapReduce, but very useful in many applications. For example:
  - The relational selection operator filters out records based on a user-defined selection predicate.
For example, from a data set of flights it can select the flights originating from Boston.
  - The relational projection operator removes fields from a record. For example, from a flight data set it could remove the arrival time field if that is not needed for the analysis task.
  - The Unix grep command finds all lines in a (text) file that match a user-defined string search pattern such as “Northeastern.”
  - Invert all edges of a graph that is stored as a set of edges. More precisely, convert each edge (X, Y) into edge (Y, X). This is a common challenge in graph analysis.
- Task: Transform each input record independently by applying some function F() to it.
- Solution: A simple Map-only job can solve this task. The Map function performs the required computation on the input record and emits the result.
  - For operators that filter out records, e.g., the selection operator and grep, F(x) conceptually returns NULL for input records x that are filtered out.
- Note that this program reads the entire input data set. If every input record has to be processed anyway, then this is the best one can do. However, for the selection operator, we are only interested in the matching records. Database systems provide index structures that can significantly reduce access cost for highly selective conditions, i.e., conditions that only select a very small fraction of the input. MapReduce does not have such indexes. In a future module we will discuss HBase, a distributed key-value storage system that can support index-like lookups in the MapReduce ecosystem.

```java
map( ..., x )
    emit( NULL, F(x) )
```

Input: x0 x1 x2 x3 x4 x5 x6 x7 x8 x9

Apply function F to each input record.
Output: F(x0) F(x1) F(x2) F(x3) F(x4) F(x5) F(x6) F(x7) F(x8) F(x9)

2.2 Map-Only Data Partitioning

- Consider data about product reviews by users of an online shopping site, stored as records with schema (userID, isPreferred, productID, productCategory, review). An analyst might be interested in exploring reviews by preferred users separately from those by regular users. Similarly, a product-centric exploration might require training separate data mining models for the different product categories. In both examples the given data set has to be partitioned into separate sub-sets.
- Task: Partition the input data set into p separate sub-sets based on properties of the input records.
- Solution: A simple Map-only job can solve this task. It uses the MultipleOutputs class to write to p different output files. Note that each of these output files can store records of a different type, e.g., one might store text data, the other pairs of integer numbers. The Map function determines the partition the input record belongs to, then emits it to the appropriate output file using MultipleOutputs.write().
- Note that for m Map tasks and p desired output partitions, this program will actually generate m·p output files. (Each Map task writes to its own set of p output files!) If these files are too small, they can be concatenated easily using HDFS file system commands.
- This approach can also be used for problems where some input records are assigned to zero or more than one of the p partitions.

2.3 Grouping and Aggregation

- Data can be partitioned with a Map-only job only if the possible partitions are known in advance. When the partitions are not known, or when one wants to compute an aggregate for each partition, Reduce is needed. This is a very common data analysis task, for example:
  - Word Count groups by the word string and computes the total count per word.
  - SQL’s GROUP BY operator can be used to compute an aggregate such as the average delay for each airline. Flight data would be grouped by airline, and then the average delay is computed for each group. GROUP BY does not need to know the airlines in advance, hence it would work correctly even if new airlines appear in the data.
  - An inverted index for a document collection returns the identifiers of all documents in the collection that contain a given search string. To create such an index for the World Wide Web, one has to do the following: for each word w on the Web, generate the list of URLs where w occurs.
  - For a graph given as a set of edges, generate the inverted graph stored as an adjacency list. Stated differently, for each node of the graph, generate the list of incoming edges.
  - Create a new object from the fields of multiple input objects. This can be useful when processing XML documents such that new documents are formed by combining tags from different input documents.
- Task: Partition the input data set and compute an aggregate for each partition.
- Solution: This task requires both a Map and a Reduce phase.
  - The Map function emits the input record, using the grouping attribute as the key and the record itself as the value. (Note that the grouping attribute could be removed from the value component, as it is already contained in the key component.)
  - The Reduce function receives all records in the corresponding group and can then compute the desired aggregate function. Depending on the aggregate function, e.g., median, this computation might require multiple passes over the input list or a sufficiently large amount of heap space to load it into memory.
  - As discussed before, for distributive and algebraic aggregates one can use a Combiner or in-mapper combining. Examples of such aggregates are sum, count, average, minimum, maximum, and standard deviation.
Combining is not applicable to holistic aggregates (e.g., median), except in the form of simple compression. (Recall that a Combiner could replace the set {A, A, B, A, B} by the more compact {(A, 3), (B, 2)}.)
  - Consider the use of a custom Partitioner for load-balancing purposes.
  - Consider secondary sort to simplify the Reduce computation, e.g., to guarantee an increasing order of values in the Reduce input list for finding the minimum.

```java
map( ..., x )
    emit( x.groupingAttribute, x )

reduce( groupID, [x1, x2, ...] ) {
    agg = compute some aggregate over the values in the input list
    emit( groupID, agg )
}
```

2.4 Global Aggregation

- Sometimes one would like to compute a single “global” aggregate for the entire input data set, e.g., the average flight delay over all flights. This can be done using the grouping-and-aggregation approach. [Challenge question: How exactly could that be done?]
- Use of a Combiner or in-mapper combining is essential for this approach, because all values are sent to a single Reducer.
- Can one find an even more efficient approach? In particular, can a global aggregate be computed with a Map-only job? Based on what you have seen so far, this seems impossible, because whenever there are multiple Map tasks, none of them can compute the desired result, since none sees the entire input. Hence Reduce is needed to aggregate across output from different Map tasks.
- It turns out that simple aggregates, in particular sum and count, can be computed with a Map-only job by exploiting Hadoop’s global counter feature. It is available through the Counter class. Global counter variables can be defined in MapReduce user code, and any task can increment them or set their value. No matter which task updates a counter, in the end it will reflect the updates performed by all completed tasks.
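The counter mechanism can be mimicked in a single process, with AtomicLong standing in for Hadoop's Counter class. This is a hedged sketch, not Hadoop code; the class and method names are invented here. Two counters suffice to derive an average, such as the average flight delay mentioned above:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// Hedged single-process sketch of the global-counter idea. Each "Map
// call" adds a flight's delay to one counter and 1 to another; the
// "driver" divides the final counter values at the end.
public class CounterAverage {
    static final AtomicLong delaySum = new AtomicLong();
    static final AtomicLong flightCount = new AtomicLong();

    // One simulated Map call per flight record.
    static void map(long delay) {
        delaySum.addAndGet(delay);
        flightCount.incrementAndGet();
    }

    // Simulated driver: run Map over all records, then read the counters.
    public static double averageDelay(List<Long> delays) {
        delaySum.set(0);
        flightCount.set(0);
        for (long d : delays) map(d);
        return delaySum.get() / (double) flightCount.get();
    }
}
```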
- To compute the average flight delay with a Map-only job, define two counters: one to keep track of the sum of all flight delays, the other for the count. When processing a flight record, the Map function simply adds the delay to the first counter (for the sum) and the value 1 to the second counter (for the count). The driver program can then read out the final counter values and return the desired average flight delay.
- By using multiple global counters, one could also compute aggregates for multiple groups of input records. However, for this to work, the groups have to be known in advance. For instance, assuming all airlines are known in advance, one can define the sum and count counters for each of them. An input record would then result in counter updates for the corresponding airline only. (If the groups are not known in advance, one has to fall back to the general grouping-and-aggregation program with a Reduce phase.)
- Note that due to their global nature, these counters are maintained by the JobTracker. Hence tasks that update a counter have to inform the JobTracker. This centralized maintenance of global counters usually does not create a bottleneck, because the update operations are lightweight. However, it is not recommended to use more than a few dozen or maybe a few hundred of these counters, as the overhead increases proportionally.

2.5 Duplicate Removal (DISTINCT)

• Duplicate removal, expressed by the DISTINCT keyword in SQL, eliminates repeat occurrences of records in an input file. It is a special case of grouping-and-aggregation: identical records form groups, and from each group exactly one representative is output.
• Task: Eliminate all duplicates from the input.
• Solution: Map emits the input record as the key. A Reduce call then receives all duplicates and only needs to output the key as the representative of the group.
  – A Combiner or in-mapper combining can be applied.
  – The standard MapReduce key comparator determines which records will be identified as identical.

```java
map( ..., x )
    emit( x, NULL )

reduce( x, [NULL, NULL, ...] )
    emit( x, NULL )
```

2.6 Random Sampling

- Random sampling’s importance for big data analysis cannot be overemphasized. When data is too big to handle with available computational resources, working with a random sample can reduce computational cost while often still producing a good approximation of the desired result. Small random samples of a big data set are also useful for testing and debugging of MapReduce programs. It would simply take too long to test and debug on the entire big input.
- Task: Sample a fraction of approximately \( p \), \( 0.0 \leq p \leq 1.0 \), of the input records uniformly at random.
  - This means that each input record should have the same probability \( p \) of being selected for the output.
- Solution: A Map-only job suffices for this task. The Map function uses a pseudorandom-number generator to determine if the input record will be emitted. For instance, if the generator produces a floating-point number \( \text{rnd} \) in the range \( 0.0 \leq \text{rnd} < 1.0 \), then the input record is emitted if and only if \( \text{rnd} < p \). (Note: we need to be careful when using pseudorandom-number generators.)
- Note that for a given input set of \( n \) records, this approach will not necessarily produce exactly \( p \cdot n \) output records. Due to randomness, it might produce more or fewer. In practice, when dealing with large numbers of records, the resulting sample size will be very close, almost always within 5% of the ideal number.
- This approach will generate one output file per Map task. These files can be concatenated using "hadoop fs -cat". Alternatively, a single identity Reducer could perform the concatenation, by emitting each record with the same key “dummy” in Map.
```java
map( ..., x ) {
  // Math.random() produces a number rnd
  // in the range 0.0 <= rnd < 1.0
  rnd = Math.random()
  if (rnd < p)
    emit( NULL, x )
}
```

Input:
```
[blue][blue][red][blue][red][blue][blue][red][blue]
```
Output:
```
[red][red][red][red]
```
Sampling rate \( p = 0.3 \)

2.7 Random Shuffling

- Random shuffling, i.e., arranging the records of a file in random order, can be useful in many situations. After a file is randomly shuffled, each block will contain a random sample of records. Hence one can obtain a random sample of exactly $S$ records by simply reading the first $S$ records from the shuffled file.
- Task: Randomly shuffle a given input file. Each input record should have the same probability of ending up in any position in the shuffled file.
- Solution: For an input record, the Map function emits this record as the value, assigned to a key that is a random number. The Reduce function emits the values in its input list in the order they are read.
- One can use a TotalOrderPartitioner to sort the file based on the random keys.
- However, even the simple default hash Partitioner would suffice. Since the keys are random numbers, the hash Partitioner assigns an input record to an essentially random Reduce task. That task outputs records in key order. Since each input record can still end up in any position with the same probability, there is no need for the more expensive (due to the quantile sampling step) and potentially less load-balanced (if the quantiles are poorly approximated) TotalOrderPartitioner.

```java
map( ..., x ) {
  // To minimize the occurrence of duplicate keys,
  // choose the random number from a large domain,
  // e.g., floating-point numbers between 0.0 and 1.0.
  rnd = Math.random()
  emit( rnd, x )
}

// If no two of the randomly chosen keys are
// identical, then each input list contains
// only a single record.
reduce( rnd, [x1, x2,...] ) {
  for each x in input list
    emit( NULL, x )
}
```

2.8 Approximate Quantiles

- Similar to minimum, maximum, and average, quantiles such as the median provide important information about a data distribution. For instance, the median housing price is a good indicator for distinguishing rich and poor neighborhoods. Furthermore, range-partitioning based on quantiles results in a balanced load distribution, because by definition the number of records between consecutive quantiles is about the same.
- As discussed in a previous module, exact quantiles can be found by sorting the data and then picking the records at the corresponding positions, e.g., the record in the middle for the median. In practice, approximate quantiles often suffice. The approach discussed here finds approximate quantiles in a single pass over the input data.
- Task: Find approximate quantiles.
- Solution: The main idea is to use the Map phase to create a random sample of the input that just fits into the memory of the single machine performing the Reduce work. The Reduce function loads the data into memory, sorts it in memory, and then picks the quantiles from the sorted sample. Map assigns the same dummy key to every emitted record.
  - Since there is only a single key (the dummy key), there is only a single Reduce function call. To avoid sorting in user code, one can use the secondary-sort design pattern.
- Note that if one uses secondary sort, the Reduce function does not even need to load all records from the input list into memory. If the number of records in the input list is known, it can simply scan through the (sorted!) list and pick the corresponding quantiles from the appropriate positions. If the number of records is not known, it can be determined by scanning the list. In neither case would the Reduce function need more than a few bytes of memory to hold the currently read input record and maintain a counter for the position in the list.
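To make the sample-then-sort idea concrete outside of Hadoop, here is a minimal single-machine sketch in plain Java. The class, method, and parameter names are my own illustration, not part of any framework: the Bernoulli test plays the role of the Map function, and the in-memory sort plus position lookup plays the role of the single Reduce call.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Single-machine sketch of the sample-then-sort quantile estimate:
// keep each record with probability p (the "Map phase"), then sort
// the sample in memory and read the quantile off the sorted sample
// (the single "Reduce call").
public class QuantileSketch {
    static double approxQuantile(double[] data, double q, double p, long seed) {
        Random rnd = new Random(seed);
        List<Double> sample = new ArrayList<>();
        for (double x : data)
            if (rnd.nextDouble() < p)   // Bernoulli sampling, as in the Map function
                sample.add(x);
        if (sample.isEmpty())
            throw new IllegalStateException("sample is empty; increase p");
        Collections.sort(sample);       // in-memory sort in the single Reducer
        int pos = (int) (q * (sample.size() - 1));
        return sample.get(pos);
    }

    public static void main(String[] args) {
        double[] data = new double[100000];
        for (int i = 0; i < data.length; i++) data[i] = i;  // uniform 0..99999
        // Estimate the median from a ~1% sample.
        System.out.println("approximate median = " + approxQuantile(data, 0.5, 0.01, 42L));
    }
}
```

As the notes point out, p should be chosen so that the expected sample size comfortably fits in the Reducer's memory; a smaller p trades approximation quality for cost.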
(Figure: records sampled by Map; the median selected from the sample by Reduce.)

- Why not use a much smaller or larger sample?
  - The smaller the sample, the less data is available for determining the quantiles in the Reducer. This leads to poorer approximation quality. On the other hand, if the sample is so big that it exceeds the amount of memory on the Reducer, then the Reduce function would have to perform more costly local disk I/O.
- Does this mean that we are free to use a sample that exceeds the amount of memory in the Reducer?
  - Yes and no. Yes, we do not need to worry about an out-of-memory exception. On the other hand, there is no free lunch when dealing with bigger data: the MapReduce environment still has to transfer the larger sample from Mappers to Reducers and sort it by key. Hence a larger sample still results in higher cost.

```java
map( ..., x ) {
  // Math.random() produces a number rnd
  // in the range 0.0 <= rnd < 1.0
  rnd = Math.random()
  // The sampling rate p should be selected such
  // that p * totalInputSize is virtually guaranteed
  // not to exceed the Reducer's heap space.
  if (rnd < p)
    emit( dummy, x )
}

reduce( dummy, [x1, x2,...] ) {
  copy all records from the input list into array A
  sort array A
  for each quantile position i
    emit( NULL, A[i] )
}
```

2.9 Top-K Records

- In addition to quantiles, an analyst can gain valuable information about a big data set by exploring the K most important records, based on some notion of importance. In particular, it is often insightful to look at the top-K largest (or smallest) records based on some field or attribute of the records. Common examples are the most active users, the people who spend the most money or time, or the users with the most friendship links.
- Task: Find the K records that have the largest value of some attribute of interest.
- Solution:
  - One can sort the input data and then easily select the K largest records from the sorted file.
This is often the most efficient method for very large K.
  - For smaller values of K, sorting can be avoided. The main idea is to scan the input only once and use in-mapper combining to keep track of the top-K records in each Map task. A single Reduce call receives these "local" top-K lists and merges them into the final result. This approach is guaranteed to find the exact global top-K, because if a record is in the global top-K, it has to be in the corresponding local top-K.

```java
class Mapper {
  localTopK

  setup() {
    init localTopK
  }

  map( ..., x ) {
    if (x belongs in localTopK)
      // Adding x also evicts the now
      // (k+1)-st record from localTopK
      localTopK.add(x)
  }

  cleanup() {
    for each x in localTopK
      emit( dummy, x )
  }
}

class Reducer {
  reduce( dummy, [x1, x2,...] ) {
    init globalTopK
    for each record x in input list
      if (x belongs in globalTopK)
        // Adding x also evicts the now
        // (k+1)-st record from globalTopK
        globalTopK.add(x)
    for each record x in globalTopK
      emit( NULL, x )
  }
}
```
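A single-machine sketch of this in-mapper-combining pattern in plain Java (all names are my own): a bounded min-heap of size K plays the role of localTopK, one loop iteration per input partition plays the role of a Map task, and merging the per-partition heaps plays the role of the single Reduce call.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

// In-mapper combining for top-K: each "Map task" keeps a bounded
// min-heap of the K largest values it has seen; the "Reduce" step
// merges the local heaps into the global top-K.
public class TopK {
    // Add x to the heap, evicting the smallest element once size exceeds k.
    static void offer(PriorityQueue<Integer> heap, int x, int k) {
        heap.add(x);
        if (heap.size() > k) heap.poll();   // poll() removes the minimum
    }

    static List<Integer> topK(List<int[]> partitions, int k) {
        PriorityQueue<Integer> global = new PriorityQueue<>();
        for (int[] part : partitions) {               // one iteration per Map task
            PriorityQueue<Integer> local = new PriorityQueue<>();
            for (int x : part) offer(local, x, k);    // localTopK
            for (int x : local) offer(global, x, k);  // cleanup(): emit to Reduce
        }
        List<Integer> result = new ArrayList<>(global);
        result.sort(Collections.reverseOrder());      // largest first
        return result;
    }

    public static void main(String[] args) {
        List<int[]> parts = List.of(new int[]{5, 1, 9}, new int[]{7, 3, 8, 2});
        System.out.println(topK(parts, 3));   // prints [9, 8, 7]
    }
}
```

The correctness argument from the notes carries over directly: any value in the global top-K must survive its local heap, so merging the local heaps loses nothing.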
Virtually Eliminating Router Bugs

Eric Keller∗  Minlan Yu∗  Matthew Caesar†  Jennifer Rexford∗
∗ Princeton University, Princeton, NJ, USA
† UIUC, Urbana, IL, USA
ekeller@princeton.edu  {minlanyu, jrex}@cs.princeton.edu  caesar@cs.uiuc.edu

ABSTRACT

Software bugs in routers lead to network outages, security vulnerabilities, and other unexpected behavior. Rather than simply crashing the router, bugs can violate protocol semantics, rendering traditional failure detection and recovery techniques ineffective. Handling router bugs is an increasingly important problem as new applications demand higher availability, and networks become better at dealing with traditional failures. In this paper, we tailor software and data diversity (SDD) to the unique properties of routing protocols, so as to avoid buggy behavior at run time. Our bug-tolerant router executes multiple diverse instances of routing software, and uses voting to determine the output to publish to the forwarding table, or to advertise to neighbors. We design and implement a router hypervisor that makes this parallelism transparent to other routers, handles fault detection and booting of new router instances, and performs voting in the presence of routing-protocol dynamics, without needing to modify software of the diverse instances. Experiments with BGP message traces and open-source software running on our Linux-based router hypervisor demonstrate that our solution scales to large networks and efficiently masks buggy behavior.
Categories and Subject Descriptors
C.2.6 [Computer-Communication Networks]: Internetworking—Routers; C.4 [Performance of Systems]: Fault tolerance, Reliability, availability and serviceability

General Terms
Design, Reliability

Keywords
Routers, Bugs, Reliability, BGP

1. INTRODUCTION

The Internet is an extremely large and complicated distributed system. Selecting routes involves computations across millions of routers spread over vast distances, multiple routing protocols, and highly customizable routing policies. Most of the complexity in Internet routing exists in protocols implemented as software running on routers. These routers typically run an operating system, and a collection of protocol daemons which implement the various tasks associated with protocol operation. Like any complex software, routing software is prone to implementation errors, or bugs.

1.1 Challenges in dealing with router bugs

The fact that bugs can produce incorrect and unpredictable behavior, coupled with the mission-critical nature of Internet routers, can produce disastrous results. This can be seen from the recent spate of high-profile vulnerabilities, outages, and huge spikes in global routing instability [40, 39, 16, 22, 21, 13, 31]. Making matters worse, ISPs often run the same protocols and use equipment from the same vendor network-wide, increasing the probability that a bug causes simultaneous failures or a network-wide crash.
While automated systems can prevent misconfigurations from occurring [23, 24], these techniques do not work for router bugs, and in fact the state-of-the-art solution today for dealing with router bugs involves heavy manual labor—testing, debugging, and fixing code. Unfortunately operators must wait for vendors to implement and release a patch for the bug, or find an intermediate work around on their own, leaving their networks vulnerable in the meantime. Worse still, bugs are often discovered only after they cause serious outages. While there has been work on dealing with failures in networks [35, 33, 27], router bugs differ from traditional “fail-stop” failures (failures that cause the router to halt in some easily-detectable way) in that they violate the semantics of protocol operation. Hence a router can keep running, but behave incorrectly – by advertising incorrect information in routing updates, or by distributing the wrong forwarding-table entries to the data plane, which can trigger persistent loops, oscillations, packet loss, session failure, as well as new kinds of anomalies that can’t happen in correctly behaving protocols. This fact, coupled with the high complexity and distributed nature of Internet routing, makes router bugs notoriously difficult to detect, localize, and contain. As networks become better at dealing with traditional failures, and as systems that automate configuration become more widely deployed, we expect bugs to become a major roadblock in improving network availability. While we acknowledge the long-standing debate in the software engineering community on whether it is possible to completely prevent software errors, we believe unforeseen interactions across protocols, the potential to misinterpret RFCs, the increasing functionality of Internet routing, and the ossification of legacy code and protocols will make router bugs a “fact-of-life” for the foreseeable future and we proceed under that assumption. 
1.2 The case for diverse replication in routers

Unlike fail-stop failures, router bugs can cause Byzantine faults, i.e., they cause routers to not only behave incorrectly, but violate protocol specification. Hence, we are forced to take a somewhat heavy-handed approach in dealing with them (yet as we will find, one that appears to be necessary, and one that our results indicate is practical). In particular, our design uses a simple replication-based approach: instead of running one instance of routing software, our design uses a router hypervisor to run multiple virtual instances of routing software in parallel. The instances are made diverse to decrease the likelihood they all simultaneously fail due to a bug. We leverage data diversity (to manipulate the inputs to the router, for example by jittering arrival time of updates, or changing the layout of the executable in memory) and software diversity (given multiple implementations of routing protocols already exist, running several of them in parallel). We then rely on Byzantine-fault tolerant (BFT) techniques to select the “correct” route to send to the forwarding table (FIB), or advertise to a neighbor. The use of BFT combined with diverse replication (running multiple diverse instances) has proven to be a great success in the context of traditional software, for example in terms of building robust operating systems and runtime environments [18, 28, 36, 44, 12]. These techniques are widely used since heterogeneous replicas are unlikely to share the same set of bugs [18, 28, 44]. In this paper, we adapt diverse replication to build router software that is tolerant of bugs. A common objection to this approach is performance overhead, as running multiple replicas requires more processing capacity. However, BFT-based techniques provide a simple (and low-cost) way to leverage the increasingly parallel nature of multicore router processors to improve availability without requiring changes to router code.
Network operators also commonly run separate hardware instances for resilience, across multiple network paths (e.g., multihoming), or multiple routers (e.g., VRRP [27]). Some vendors also protect against fail-stop failures by running a hot-standby redundant control plane either on multiple blades within a single router or even on a single processor with the use of virtual machines [19], in which case little or no additional router resources are required. Since router workloads have long periods with low load [9], redundant copies may be run during idle cycles. Recent breakthroughs vastly reduce computational overhead [45] and memory usage [26], by skipping redundancy across instances. 1.3 Designing a Bug-Tolerant Router In this paper, we describe how to eliminate router bugs “virtually” (with use of virtualization technologies). We design a bug-tolerant router (BTR), which masks buggy behavior, and avoids letting it affect correctness of the network layer, by applying software and data diversity to routing. Doing so, however, presents new challenges that are not present in traditional software. For example, (i) wide-area routing protocols undergo a rich array of dynamics, and hence we develop BFT-based techniques that react quickly to buggy behavior without over-reacting to transient inconsistencies arising from routing convergence, and (ii) our design must interoperate with existing routers, and not require extra configuration efforts from operators, and hence we develop a router hypervisor that masks parallelism and churn (e.g., killing a faulty instance and bootstrapping a new instance). At the same time we leverage new opportunities made available by the nature of routing to build custom solutions and extend techniques previously developed for traditional software. 
For example, (i) routers are typically built in a modular fashion with well-defined interfaces, allowing us to adapt BFT with relatively low complexity, and implement it in the hypervisor with just a few hundred lines of code, (ii) using mechanisms that change transient behavior without changing steady-state outcomes is acceptable in routing, which we leverage to achieve diversity across instances, and (iii) routing has limited dependence on past history, as the effects of a bad FIB update or BGP message can be undone simply by overwriting the FIB or announcing a new route, which we leverage to speed reaction by selecting a route early, when only a subset of instances have responded, and updating the route as more instances finish computing. Moreover, router outputs are independent of the precise ordering and timing of updates, which simplifies recovery and bootstrapping new instances. The next section discusses how diversity can be achieved and how effective it is, followed by a description of our design (Section 3) and implementation (Section 4). We then give performance results in Section 5, consider possible deployment scenarios in Section 6, contrast with related work in Section 7, and conclude in Section 8.

2. SOFTWARE AND DATA DIVERSITY IN ROUTERS

The ability to achieve diverse instances is essential for our bug-tolerant router architecture. Additionally, for performance reasons, it is important that the number of instances that need to be run concurrently is minimal. Fortunately, the nature of routing and the current state of routing software lead to a situation where we are able to achieve enough diversity, and it is effective enough that only a small number of instances are needed (e.g., 3-5, as discussed below). In this section we discuss the various types of diversity mechanisms, in what deployment scenario they are likely to be used, and how effective they can be in avoiding bugs.
Unfortunately, directly evaluating the benefits of diversity across large numbers of bugs is extremely challenging, as it requires substantial manual labor to reproduce bugs. Hence, to gain some rough insights, we studied the bug reports from the XORP and Quagga Bugzilla databases [8, 5], taxonomized each into what type of diversity would likely avoid the bug, and experimented with a small subset, some of which are described in Table 1.

2.1 Diversity in the software environment

Code base diversity: The most effective, and commonly thought of, type of diversity is where the routing software comes from different code bases. While often dismissed as being impractical because a company would never deploy multiple teams to develop the same software, we argue that diverse software bases are already available and that router vendors do not need to start from scratch and deploy multiple teams. First, consider that there are already several open-source router software packages available (e.g., XORP, Quagga, BIRD). Their availability has spawned the formation of a new type of router vendor based on building a router around open-source software [7, 8]. Additionally, the traditional (closed-source) vendors can make use of open-source software, something they have done in the past (e.g., Cisco IOS is based on BSD Unix), and hence may run existing open-source software as a “fallback” in case their main routing code crashes or begins behaving improperly. Router vendors that do not wish to use open-source software have other alternatives for code diversity; for example, router vendors commonly maintain code acquired from the purchase of other companies [38]. As a final possibility, consider that ISPs often deploy routers from multiple vendors. While it is possible to run our bug-tolerant router across physical instances, it is most practical to run it in a single, virtualized device.
Even without access to the source code, this is still a possibility with the use of publicly available router emulators [1, 3]. This way, network operators can run commercial code along with our hypervisor directly on routers or server infrastructure without direct support from vendors. While intellectual property restrictions arising from their intense competition make vendors reticent to share source code with one another, this also makes it likely that different code bases from different vendors are unlikely to share code (and hence unlikely to share bugs). We base our claim that this is the most effective approach partially on previous results, which found that software implementations written by different programmers are unlikely to share the vast majority of implementation errors in code [30]. This result can be clearly seen in two popular open-source router software packages: Quagga and XORP differ in terms of update processing (timer-driven vs. event-driven), programming language (C vs. C++), and configuration language, leading to different sorts of bugs, which are triggered on differing inputs. As such, code-base diversity is very effective and requires only three instances to be run concurrently. However, effectively evaluating this is challenging, as bug reports typically do not contain information about whether inputs triggering the bug would cause other code bases to fail. Hence we only performed a simple sanity-check: we selected 9 bugs from the XORP Bugzilla database, determined

2.2 Execution environment diversity

Data diversity through manipulation of the execution environment has been shown to automatically recover from a wide variety of faults [12]. In addition, routing-software-specific techniques exist, two of which are discussed below. As closed-source vendors do not get the full benefit from running from multiple code bases, they will need to rely on data diversity, most likely as a complement to version diversity.
In that case, around five instances will be needed, depending on the amount of difference between the different versions. This comes from the result of our study, which showed version diversity to be 75% effective, so we assume that two versions will be run, each with two or three instances of that version (each diversified in terms of execution environment, which, as we discuss below, can be fairly effective).

**Update timing diversity:** Router code is heavily concurrent, with multiple threads of execution and multiple processes on a single router, as well as multiple routers simultaneously running, and hence it is not surprising that this creates the potential for concurrency problems. Luckily, we can take advantage of the asynchronous nature of the routing system to increase diversity, for example, by introducing delays to alter the timing/ordering of routing updates received at different instances without affecting the correctness of the router (preserving any ordering required by the dependencies created by the protocol, e.g., announcements for the same prefix from a given peer router must be kept in order, but announcements from different peer routers can be processed in any order). We were able to avoid two of the example bugs described in Table 1 with a simple tool to introduce a randomized short delay (1-10ms) when delivering messages to the given instance. Further, by manually examining the bug databases, we found that approximately 30% of bugs could be avoided by manipulating the timing/ordering of routing updates.

(Footnote: To compare with closed-source software, we also studied publicly available Cisco IOS bug reports, though since we do not have access to IOS source code we did not run our system on them.)

**Connection diversity:** Many bugs are triggered by changes to the router’s network interfaces and routing sessions with neighbors.
From this, we can see that another source of diversity involves manipulating the timing/ordering of events that occur from changes in the state or properties of the links/interfaces or routing session. As our architecture (discussed in Section 3) introduces a layer between the router software and the sessions to the peer routers, we can modify the timing and ordering of connection arrivals or status changes in network interfaces. For the two example bugs in Table 1, we found they could be avoided by simple forms of connection diversity, by randomly delaying and restarting connections for certain instances. By manually examining the bug database, we found that approximately 12% of bugs could be avoided with this type of diversity.

2.3 Protocol diversity

As network operators have the power to perform configuration modifications, something the router vendors have limited ability to do, there are additional forms of diversity that they can make use of. Here, we discuss one in particular. The process of routing can be accomplished by a variety of different techniques, leading to multiple different routing protocols and algorithms, including IS-IS, OSPF, RIP, etc. While these implementations differ in terms of the precise mechanisms they use to compute routes, they all perform a functionally-equivalent procedure of determining a FIB that can be used to forward packets along a shortest path to a destination. Hence router vendors may run multiple different routing protocols in parallel, voting on their outputs as they reach the FIB. To get some rough sense of this approach, we manually checked bugs in the Quagga and XORP Bugzilla databases to determine the fraction that resided in code that was shared between protocols (e.g., the zebra daemon in Quagga), or code that was protocol independent. From our analysis, we estimate that at least 60% of bugs could be avoided by switching to a different protocol.

3.
BUG TOLERANT ROUTER (BTR)

Our design works by running multiple diverse router instances in parallel. To do this, we need some way of allowing multiple router software instances to simultaneously execute on the same router hardware. This problem has been widely studied in the context of operating systems, through the use of virtual machine (VM) technologies, which provide isolation and arbitrate sharing of the underlying physical machine resources. However, our design must deal with two new key challenges: (i) replication should be transparent and hidden from network operators and neighboring routers (Section 3.1), and (ii) reaching consensus must handle the transient behavior of routing protocols, yet must happen quickly enough to avoid slowing reaction to failures (Section 3.2).

3.1 Making replication transparent

First, our design should hide replication from neighboring routers. This is necessary to ensure deployability (to maintain sessions with legacy routers), efficiency (to avoid requiring multiple sessions and streams of updates between peers), and ease of maintenance (to avoid the need for operators to perform additional configuration work). To achieve this, our design consists of a router hypervisor, as shown in Figure 1. The router hypervisor performs four key functions:

**Figure 1: Architecture of a bug-tolerant router.**

**Sharing network state amongst replicas:** Traditional routing software receives routing updates from neighbors, and uses information contained within those updates to select and compute paths to destinations. In our design, multiple instances of router software run in parallel, and somehow all these multiple router instances need to learn about routes advertised by neighbors. To compute routes, each internal instance needs to be aware of routing information received on peering sessions.
However, this must happen without having instances directly maintain sessions with neighboring routers. To achieve this, we use a replicator component, which acts as a replica coordinator, sending a copy of all data received on the session to each router instance within the system. Note that there may be multiple sessions with a given peer router (e.g., in the case of protocol diversity), in which case the replicator sends received data to the appropriate subset of instances (e.g., those running the same protocol). The replicator does not need to parse update messages, as it simply forwards all data it receives at the transport layer to each instance.

**Advertising a single route per prefix:** To protect against a buggy instance, which may keep running but output an incorrect route, we should select the majority result when deciding what information to push to the FIB, or to advertise to neighbors. To do this, we run a voter module that monitors advertisements from the router instances, and determines the route the router should use (e.g., the majority result). Our design contains two instances of the voter: an update voter that determines which routing updates should be sent to neighbors, and a FIB voter that determines which updates should be sent to the router's FIB (forwarding table). As with the replicator, the update voter may vote among a subset of instances, for example, those belonging to the same protocol. The FIB voter votes among all instances, as all instances must come to the same decisions with regard to the FIB. To ensure advertisements are consistent with FIB contents, the update voter and FIB voter must select the same routes; hence the same voting algorithm must be used on both updates and FIB changes. To avoid introducing bugs, the voter should be as simple as possible (our voter implementation, containing multiple alternative voting strategies, is 514 lines of code).
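As a concrete illustration, the per-prefix majority vote the voter performs can be sketched as follows. This is a minimal sketch in Python; the function name and route encodings are our own illustrative assumptions, not drawn from the actual implementation, which compares opaque update strings.

```python
# Hypothetical sketch of per-prefix majority voting: each instance
# reports its current best route for a prefix (as an opaque string),
# and the voter emits a route only when a strict majority agree.
from collections import Counter

def majority_route(candidates):
    """Return the route advertised by a strict majority of instances,
    or None if no route reaches a majority (e.g., during convergence)."""
    routes = [r for r in candidates if r is not None]
    if not routes:
        return None  # all instances withdrew: the NULL update wins
    route, votes = Counter(routes).most_common(1)[0]
    return route if votes > len(candidates) // 2 else None

# Two of three instances agree; the third lags behind with a stale route:
print(majority_route(["AS7018 AS701", "AS7018 AS701", "AS3356 AS701"]))
```

Note that treating routes as opaque strings, as the real voter does, keeps the comparison logic trivial: a vote is simply an equality test, with no protocol-specific parsing.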
We assume the voter is trusted (since it is much simpler than router code, we expect it to have significantly fewer bugs, and therefore the fact that it is a single point of failure is only a slight concern), that replication is asynchronous (we do not assume all instances respond equally fast, as instances may be slow or mute due to bugs), and that replication is transparent (external routers do not interact directly with the multiple instances, so as to simplify deployment).

**Maintaining a set of running replicas:** BFT-based techniques rely on having a sufficient number of correctly-behaving replicas in order to achieve consensus. Hence, if an instance crashes or begins producing buggy output, we may wish to replace it with a new copy. To achieve this, our hypervisor is responsible for bootstrapping the new instance when it begins running. For traditional routers, bootstrapping involves establishing a session with a neighboring router, which causes the neighboring router to send out update messages for each of the prefixes it has an entry for in its RIB. To avoid introducing externally visible churn, the hypervisor keeps a history of the last update peers have sent for each prefix, and replays this history to any new instance upon startup.

**Presenting a common configuration interface:** As there is no standardization of the configuration interface in routers, each router has ended up with its own interface. In the case where instances from different code bases are used, to keep the network operator from needing to configure each instance separately, a mechanism is needed to hide the differences between configuration interfaces. Fortunately, this is not unlike today's situation, where ISPs use routers from multiple vendors. To cope with this, ISPs often run configuration management tools that automate the process of mapping a common interface onto each vendor-specific one. As such, we can rely on these same techniques to hide the configuration differences.
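The update-history replay used for bootstrapping can be sketched as follows. The class and method names here are hypothetical; the real hypervisor stores opaque transport-layer data rather than parsed updates.

```python
# Sketch of the hypervisor's replay history (hypothetical names): it
# records the last update received from each (prefix, neighbor) pair,
# and replays the whole history to bootstrap a fresh instance without
# the neighbor having to re-send its routes.
class ReplayHistory:
    def __init__(self):
        self.last_update = {}  # (prefix, neighbor) -> raw update data

    def record(self, prefix, neighbor, raw_update):
        # Only the most recent update per pair matters for the RIB.
        self.last_update[(prefix, neighbor)] = raw_update

    def bootstrap(self, instance):
        # Replay every stored update into the new instance's session.
        for (prefix, neighbor), raw in self.last_update.items():
            instance.deliver(neighbor, raw)

class RecordingInstance:  # stand-in for a freshly started router instance
    def __init__(self):
        self.received = []

    def deliver(self, neighbor, raw):
        self.received.append((neighbor, raw))

h = ReplayHistory()
h.record("10.0.0.0/8", "peer1", "UPDATE v1")
h.record("10.0.0.0/8", "peer1", "UPDATE v2")    # overwrites v1
h.record("192.168.0.0/16", "peer2", "UPDATE x")
fresh = RecordingInstance()
h.bootstrap(fresh)
print(fresh.received)  # only the last update per (prefix, neighbor) pair
```

Because the history keeps only the most recent update per (prefix, neighbor) pair, its size is bounded by the RIB, and replay does not leak any churn to the neighbors.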
### 3.2 Dealing with the transient and real-time nature of routers

The voter's job is to arbitrate amongst the "outputs" (modifications to the FIB, outbound updates sent to neighbors) of individual router instances. This is more complex than simply selecting the majority result: during convergence, the different instances may temporarily have different outputs without violating correctness. At the same time, routers must react quickly enough to avoid slowing convergence. Here, we investigate several alternative voting strategies to address this problem, along with their tradeoffs. (Since voting also reveals the set of misbehaving instances, our approach also simplifies diagnosis, as the hypervisor can explicitly report the set of buggy outputs it observes.)

**Handling transience with wait-for-consensus:** The extreme size of the Internet, coupled with the fact that routing events are propagated globally and individual events trigger multiple routing updates, results in very high update rates at routers. With the use of replication, this problem is potentially worsened, as different instances may respond at different times, and during convergence they may temporarily (and legitimately) produce different outputs. To deal with this, we use wait-for-consensus voting, in which the voter waits for all instances to compute their results before determining the majority vote. Because all non-buggy routers output the same correct result in steady state, this approach can guarantee that if $k$ or fewer instances are faulty with at least $2k+1$ instances running, no buggy result will reach the FIB or be propagated to a peer. Note that in practice, waiting for consensus may also reduce instability, as it has an effect similar to the MRAI (Minimum Route Advertisement Interval) timer (routers with MRAI send updates to their neighbors only when a timer expires, which eliminates multiple updates to a prefix that occur between timer expiries).
Namely, forcing the voter to wait for all instances to agree eliminates the need to advertise changes that happen multiple times while it is waiting (e.g., in the presence of unstable prefixes). However, the downside of this is that reaction to events may be slowed in some cases, as the voter must wait for the $k+1$th slowest instance to finish computing the result before making a decision.

**Speeding reaction time with master/slave:** Routers must react quickly to failures (including non-buggy events) to ensure fast convergence and avoid outages. At the same time, the effects of a bad FIB update or BGP message can be undone simply by overwriting the FIB or announcing a new route. To speed reaction time, we hence consider an approach where we allow outputs to temporarily be faulty. Here, we mark one instance as the master, and the other instances as slaves. The voter operates by always outputting the master's result. The slaves' results are used to cross-check against the master after the update is sent, or during idle cycles. The benefit of this approach is that it speeds convergence to the running time of the master's computation. In addition, convergence is no worse than the convergence of the master, and hence at most one routing update is sent for each received update. However, the downside of this approach is that if the master becomes buggy, we may temporarily output an incorrect route. To address this, when failing over to a slave, the voter readvertises any differences between the slaves' routing tables and the routing table computed by the master. Hence, temporarily outputting an incorrect route may not be a problem, as it only leads to a transient problem that is fixed when the slaves overthrow the master.

Finally, we consider a hybrid scheme which we refer to as continuous-majority. This approach is similar to wait-for-consensus in that the majority result is selected to be used for advertisement or for population into the FIB.
However, it is also similar to master/slave in that it does not wait for all instances to compute results before selecting the result. Instead, every time an instance sends an update, the voter reruns its voting procedure, and updates are only sent when the majority result changes. The benefit of this approach is that it may speed reaction to failure, as the majority result may be reached before the slowest instance finishes computing. The downside of this approach is that convergence may be worsened, as the majority result may change several times for a single advertised update. Another downside is that voting needs to be performed more often; though, as we show in our experiments (Section 5), this overhead is negligible under typical workloads.

### 4. ROUTER HYPERVISOR PROTOTYPE

Our implementation had three key design goals: (i) not requiring modifications to routing software, (ii) being able to automatically detect and recover from faults, and (iii) low complexity, so as not to be a source of new bugs. Most of our design is agnostic to the particular routing protocol being used. For the locations where protocol-specific logic was needed, we were able to treat messages mostly as opaque strings. This section describes our implementation, which consists of a set of extensions built on top of Linux. Our implementation was tested with XORP versions 1.5 and 1.6, Quagga versions 0.98.6 and 0.99.10, and BIRD version 1.0.14. We focused our efforts on supporting BGP, due to its complexity and propensity for bugs. Section 4.1 describes how we provide a wrapper around the routing software, so that unmodified routing software can be used, and Section 4.2 describes the various faults that can occur and how our prototype detects and recovers from them.
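Before turning to the wrapper details, the behavioral difference between the three voting disciplines of Section 3.2 can be illustrated with a small sketch. The helper below is hypothetical (our own naming and simplification); the real voter operates on opaque update strings rather than labeled routes.

```python
# Toy contrast of the three voting strategies. Each instance's current
# output for one prefix arrives asynchronously; None means an instance
# has not yet computed a result for the latest event.
from collections import Counter

def vote(strategy, outputs, master=0):
    """outputs: list of per-instance routes (None = not yet computed)."""
    if strategy == "master/slave":
        return outputs[master]              # emit the master's result at once
    ready = [o for o in outputs if o is not None]
    if strategy == "wait-for-consensus" and len(ready) < len(outputs):
        return None                         # wait for every instance to answer
    # continuous-majority (and consensus once all are ready): strict majority
    if not ready:
        return None
    route, votes = Counter(ready).most_common(1)[0]
    return route if votes > len(outputs) // 2 else None

step = ["R1", "R1", None]  # two of three instances have answered
print(vote("master/slave", step))          # "R1" immediately
print(vote("wait-for-consensus", step))    # None: still waiting on one instance
print(vote("continuous-majority", step))   # "R1": a majority already agrees
```

The sketch makes the tradeoff concrete: master/slave answers fastest but trusts one instance; wait-for-consensus never emits a minority result but is gated on the slowest instance; continuous-majority answers as soon as any strict majority forms.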
#### 4.1 Wrapping the routing software

To eliminate the need to modify existing routing software, our hypervisor acts as a wrapper that hides from the routing software the fact that it is part of a bug-tolerant router, and allows the routing instances to share resources such as ports and access to the FIB. Our design (Figure 2) takes advantage of the fact that sockets are used for communicating with peer routers, and for communicating forwarding table (FIB) updates to the kernel. Hence, our implementation intercepts socket calls from the router instances using the LD_PRELOAD environment variable and a modified libc library, called hv-libc, to redirect messages to a user-space module, called virtd, which manages all communication.

**Figure 2: Implementation architecture.**

The two key functions the hypervisor then needs to manage are discussed below:

**Socket-based communications:** To connect to peer routers (with TCP) and to write to the common FIB (with Netlink), the multiple routers need to share access to a common identifier space (e.g., port 179 in BGP). We handle this by intercepting socket system calls in hv-libc, performing address translation in hv-libc, and using virtd as a proxy (e.g., when a router instance listens on port 179, it is instead made to listen on a random port; virtd listens on 179 and connects to each of the random ports when receiving an incoming connection).

**Bootstrapping new connections:** When the BTR initially starts up, the routing instances start with empty routing tables. In BGP, a session with a peer is established by creating a TCP connection, exchanging OPEN messages, and acknowledging the OPEN message with a KEEPALIVE message. After the session is established, the peers exchange routing information. However, when replacing a failed instance, we need to bootstrap it locally, to prevent the failure from being externally visible (e.g., sending a route-refresh to a peer).
Additionally, we need to bootstrap it independently, to prevent the new instance from starting in a faulty state (e.g., by bootstrapping off another router instance). Since a router's state only depends on the last received RIB advertised by its neighbors, we add some additional logic to the hypervisor to store the last-received update for each (prefix, neighbor) pair. Then, when a new instance is started, the hypervisor replays its stored updates. To lower complexity, the hypervisor treats the (prefix, neighbor) fields and other attributes in the packets as opaque strings, and does not implement protocol logic such as route selection.

#### 4.2 Detecting and recovering from faults

To deal with bugs, our hypervisor must detect which outputs are buggy (e.g., with voting), and recover from the buggy output (by advertising the voting result, and if necessary, restarting the buggy instance).

**Detection:** One of our main goals is that the BTR should be able to automatically detect and recover from bugs affecting the correctness of the router's control or data planes. Since our design fundamentally relies on detecting differences in the outputs of different instances, we need to handle every possible way their outputs could differ. All faults can be generalized into four categories: (i) an instance sending a message when it should not, (ii) an instance not sending a message when it should, (iii) an instance sending a message with incorrect contents, and (iv) bugs that cause a detectable faulty system event, such as a process crash or socket error. The first three categories are detected by voting (the fourth category is easily detectable, so no further discussion is given). If an instance has a different output from the majority, we consider it a fault. For example, in case (i) above, the winning update will be the NULL update; in cases (ii) and (iii), the winning update will be the most-commonly advertised one.
To avoid reacting to transient changes, voting is only performed across steady-state instance outputs, which have been stable for a threshold period of time. We then mark instances whose steady-state outputs differ from those of the majority, or that are not yet stable, as being faulty (including in schemes like master/slave, which perform this step after advertising).

**Recovery:** In the common case, recovering from a buggy instance simply involves using the output from the voting procedure. However, to deal with cases where an instance is persistently buggy, or crashes, we need some way to kill and restart it. As a heuristic, we modified our hypervisor with a fault threshold timeout. If an instance continues to produce buggy output for longer than the threshold, or if it undergoes a faulty system event, the instance is killed. To maintain a quorum of instances on which voting can be performed, the BTR can restart the failed instance, or replace it with an alternate diverse copy. In addition, to support the master/slave voting scheme, we need some way to overwrite previously-advertised buggy updates. To deal with this, our implementation maintains a history of previously-advertised updates when running this voting scheme. When the hypervisor switches to a new master, all updates in that history that differ from the currently advertised routes are sent out immediately.

#### 4.3 Reducing complexity

It is worth discussing here the role the hypervisor plays in the overall reliability of the system. As we are adding software, this can increase the possibility of bugs in the overall system. In particular, our goals for the design are that (i) the design is simple, implementing only a minimal set of functionality, reducing the set of components that may contain bugs, and (ii) the design is small, opening the possibility of formal verification of the hypervisor, a more realistic task than verifying an entire routing software implementation.
To achieve these goals, our design only requires the hypervisor to perform two functions: (i) acting as a TCP proxy, and (ii) bootstrapping new instances. Below, we describe how these functions are performed with low complexity.

**Acting as a TCP proxy:** Acting as a TCP proxy simply involves accepting connections from one end point (remote or local) and connecting to the other. When a TCP connection already exists, the hypervisor simply needs to accept the connection. Then, upon any exchange of messages (in or out), the hypervisor simply passes data from one port to another. In addition, our design uses voting to make replication transparent to neighboring routers. Here, the update messages are voted upon before being sent to the adjacent router. However, this simply involves comparing opaque strings (the attributes) and does not require understanding the values in the strings. Overall, our implementation included multiple voting algorithms and still was only 514 lines of code. These code changes occur only in the hypervisor, reducing the potential for new bugs by increasing modularity and reducing the need to understand and work with existing router code. From this, we can see that the hypervisor design is simple in terms of functionality, and much of the functionality is not in the critical section of code that acts as a single point of failure.

**Bootstrapping new instances:** Bootstrapping new instances requires maintaining some additional state. However, bugs in this part of the code only affect the ability to bootstrap new instances, and do not affect the "critical path" of voting code. One can think of this code as a parallel routing instance which is used to initialize the state of a new instance. Of course, if this instance's RIB is faulty, the new instance will be started in an incorrect state.
However, this faulty state would either be automatically corrected (e.g., if the adjacent router sends a new route update that overwrites the local faulty copy) or it would be determined to be faulty (e.g., when the faulty route is advertised), in which case a new instance is started. Additionally, the RIB that needs to be kept is simply a history of messages received from the adjacent router, and is therefore simple. Bootstrapping a new instance also requires intercepting BGP session establishment. Here, the hypervisor simply needs to observe the first instance starting a session (an OPEN message followed by a KEEPALIVE); subsequent instances simply get the two received messages replayed.

### 5. EVALUATION

We evaluate the three key assumptions in our work:

**It is possible to perform voting in the presence of dynamic churn (Section 5.1):** Voting is simple to do on fixed inputs, but Internet routes are transient by nature. To distinguish instances that are still converging to the correct output from those that are sending buggy outputs, our system delays voting until routes become stable, introducing a tradeoff between false positives (incorrectly believing an unstable route is buggy) and detection time (during which a buggy route may be used). Since these factors are independent of the precise nature of bugs but depend on update dynamics, we inject synthetic faults and replay real BGP routing traces.

**It is possible for routers to handle the additional overhead of running multiple instances (Section 5.2):** Internet routers face stringent performance requirements, and hence our design must have low processing overhead. We evaluate this by measuring the pass-through time for routing updates to reach the FIB or neighboring routers after traversing our system. To characterize performance under different operating conditions, we vary the routing update playback rate, the source of updates (edge vs. tier-1 ISP), and the number of peers.
**Running multiple router replicas does not substantially worsen convergence (Section 5.3):** Routing dynamics are highly dependent on the particular sequence of steps taken to arrive at the correct route; choosing the wrong sequence can vastly increase processing time and control overhead. To ensure our design does not harm convergence, we simulate update propagation in a network of BTRs, and measure convergence time and overhead. For completeness, we also cross-validate these results against our implementation.

#### 5.1 Voting in the presence of churn

To evaluate the ability to perform voting in the presence of routing churn, we replayed BGP routing updates collected from Route Views [6] against our implementation. In particular, we configure a BGP trace replayer to play back a 100-hour-long trace starting on March 1st 2007 at 12:02am UTC. The replayer plays back multiple streams of updates, each from a single vantage point, and we collect information on the amount of time it takes the system to select a route. Since performance depends only on whether the bug is detected by voting or not, and is independent of the particular characteristics of the bug being injected, here we use a simplified model of bugs (based on the model presented in Section 4.2), where bugs add/remove updates and change the next-hop attribute for a randomly-selected prefix, and have two parameters: (i) duration, the length of time an instance's output for a particular prefix is buggy, and (ii) interarrival time, the length of time between buggy outputs. As a starting point for our baseline experiments, we assume the length of time a bug affects a router, and bugs' interarrival times, are similar to those of traditional failures, with a duration of 600 seconds and an interarrival time of 1.2 million seconds [34].

**5.1.1 Comparison of voting strategies**

There is a very wide space of voting strategies that could be used in our system.
To explore tradeoffs in this space, we investigated performance under a variety of alternative voting strategies and parameter settings. We focus on several metrics: the fault rate (the fraction of time the voter outputs a buggy route), the waiting time (the amount of time the voter waits before outputting the correct route), and the update overhead (the number of updates the voter outputs).

**Fault rate:** We investigate the fault rate of the voting strategies by injecting synthetic faults and varying their properties. First, we varied the mean duration and interarrival times of synthetic faults (Figures 3 and 4). We found that for very high bug rates, wait-3 (waiting for $K = 3$ out of $R = 3$ copies to agree before selecting the majority result) outperformed master/slave. This happened because wait-3 is more robust to simultaneous bugs than master/slave, which takes a short time to detect a fault, potentially outputting an incorrect route in the meantime. In addition, unless the bug rate is extremely high, continuous-majority performs nearly as well as wait-3, with similar robustness and update overhead. Overall, we found that recovery almost always took place within one second.

Increasing the number of instances running in parallel ($R$) makes the router even more tolerant of faults, but incurs additional overhead. Also, wait-for-consensus and continuous-majority gain more from larger values of $R$ than the master/slave strategy. For example, when moving from $R = 3$ to $R = 4$ instances, the fault rate decreases from 0.088% to 0.003% with wait-for-consensus, while with master/slave the fault rate only decreases from 0.089% to 0.06%. However, there may be practical limits on the amount of diversity achievable (for example, if there is a limited number of diverse code instances, or a bound on the ability to randomize update timings).
This leads to the question: if we have a fixed number of diverse instances, how many should be run, and how many should be kept as standbys (not running, but started up on demand)? We found that standby routers were less effective than increasing $R$, but only for small values of $R$, indicating that for large numbers of diverse instances, most instances could be set aside as standbys to decrease runtime overhead. For example, if $R = 3$, under the continuous-majority strategy we attain a fault rate of 0.02%. Increasing $R$ to 4 reduced the fault rate to 0.0006%, while instead using a standby router with $R = 3$ reduced the fault rate to 0.0008%. This happens because buggy outputs are detected quickly enough that failing over to a standby is nearly as effective as having it participate in voting at every time step. Because of this, operators can achieve much of the benefit of a larger number of instances, even if the additional instances are run as lower-priority (e.g., only updated during idle periods) standbys.

**Waiting time:** Different voting algorithms provide different tradeoffs between waiting time (the time from when a new best route arrives to when it is output by the voter) and the fault rate. The master/slave strategy provides the smallest waiting time (0.02 sec on average), but incurs a higher fault rate (0.0006% on average), as incorrect routes are advertised for a short period whenever the master becomes buggy. Continuous-majority has longer wait times (0.035 sec on average), but a lower fault rate (less than 0.00001% on average), as routes are not output until multiple instances converge to the same result. The wait-for-consensus strategy's performance is a function of the parameter $K$: larger values of $K$ increase wait time but decrease the fault rate.
However, we found that increasing $K$ to moderate sizes incurred less delay than the pass-through time for a single instance, and hence setting $K = R$ offered a low fault rate with only minor increases in waiting time.

**Update overhead:** Finally, we compare the voting strategies in terms of their effect on update overhead (the number of routing updates they generate), benchmarking them against a standard router (std. router). Intuitively, running multiple voters within a router might seem to increase update overhead, as the voter may change its result multiple times for a single routing update. However, in practice, we find no substantial increase, as shown in Figure 5, which plots a CDF of the number of updates (measured over one-second intervals). For the master/slave strategy this is expected, since a single master almost always drives computation. In wait-for-consensus, no updates are generated until all instances arrive at an answer, and hence no more than one outbound update is generated per inbound update, as in a standard router. Interestingly, the continuous-majority strategy also does not significantly increase update overhead. This happens because when an update enters the system, the voter's output will only change when the majority result changes, which can only happen once per update.

**5.1.2 Performance of fault detection**

Protocols today often incorporate thresholds (such as BGP's MRAI timer) to rate-limit updates. To evaluate the level of protection our scheme provides against unstable instances, as well as its ability to distinguish steady-state from transient behavior, we incorporated a configurable timeout parameter ($T$) in fault detection to identify when a route becomes stable.
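The role of the stability threshold can be sketched as follows. The names are hypothetical; $T$ plays the same role as the timeout parameter described above, gating which instance outputs are eligible to participate in voting.

```python
# Sketch of steady-state detection with a stability threshold T
# (hypothetical names): an instance's output for a prefix only takes
# part in voting once it has been unchanged for at least T seconds.
def stable_outputs(last_change_time, outputs, now, T):
    """Return {instance: route} restricted to outputs stable for >= T."""
    return {i: r for i, r in outputs.items()
            if now - last_change_time[i] >= T}

outputs = {"inst0": "R1", "inst1": "R1", "inst2": "R2"}
last_change = {"inst0": 100.0, "inst1": 101.0, "inst2": 109.5}
# At t=110 with T=5, inst2 changed 0.5s ago and is still converging,
# so only inst0 and inst1 are eligible to vote:
print(stable_outputs(last_change, outputs, now=110.0, T=5.0))
```

This makes the tradeoff explicit: a larger $T$ filters out more transient (legitimately differing) outputs, but also delays the point at which a genuinely buggy output can be outvoted.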
Figure 6 shows the tradeoff, as this parameter varies, between the false negative rate (the number of times a non-buggy instance is treated as buggy) and the fault rate (i.e., the false positive rate of the voter, or the fraction of time a buggy route is treated as non-buggy). We found that as $T$ increases, the false negative rate decreases, as larger values of $T$ reduce the probability that transient changes will be considered when voting. The false negative rate does not vary among different voting strategies, as fault detection is only performed on steady-state outputs, and the algorithmic differences between the strategies disappear when they operate on outputs that are not dynamically changing. The fault rate increases with $T$, as when a bug does occur, it takes longer to detect it. Interestingly, the fault rate initially decreases with $T$; this happens because for low values of $T$, more instances are treated as buggy, giving fewer inputs to the voter and increasing the probability of an incorrect decision. Overall, we found that it was possible to tune $T$ to simultaneously achieve a low fault rate, a low false negative rate, and a low detection time.

#### 5.2 Processing overhead

We evaluate the overhead of running multiple instances using our hypervisor with both XORP- and Quagga-based instances running on single-core 3 GHz Intel Xeon machines with 2 GB of RAM. We measure the update pass-through time as the amount of time from when the BGP replayer sends a routing update to when a resulting routing update is received at the monitor. However, some updates may not trigger routing updates to be sent to neighbors, if the router decides to continue using the same route. To deal with this case, we instrument the software router's source code to determine the point in time when it decides to retain the same route.
We also instrument the kernel to measure the FIB pass-through time, as the amount of time from when the BGP replayer sends an update to the time the new route is reflected in the router's FIB (which is stored as the routing table in the Linux kernel). Figure 7 shows the pass-through time required for a routing change to reach the FIB. We replayed a Routeviews update trace and varied the number of Quagga instances from 1 to 5, running atop our router hypervisor on a single-core machine. We found the router hypervisor increases FIB pass-through time by 0.08% on average, to 0.06 seconds. Our router hypervisor implementation runs in user space, instead of directly in the kernel; with a kernel-based implementation this overhead would be further reduced. Increasing the number of instances to 3 incurred an additional 1.7% increase, and to 5 a 4.6% increase. This happens because the multiple instances contend for CPU resources (we found that with multicore CPUs this overhead was substantially lower under heavy loads). To evaluate performance under heavier loads, we increased the rate at which the replayer played back routing updates by a factor of 3000x. Under this heavy load, FIB pass-through times slow for both the standard router and the BTR due to increased queuing delays. However, even under these heavy loads, the BTR incurs a delay penalty of less than 23%. To estimate effects on convergence, we also measured the update pass-through time as the time required for a received routing change to be sent to neighboring routers. We found this time to be nearly identical to the FIB pass-through time when the MRAI timer was disabled, as updates are sent immediately after updating the FIB. When MRAI was enabled (even when set to 1 second, the lowest possible setting for Quagga), the variation in delay across instances was dwarfed by the delay incurred by MRAI.
Finally, we found that switching to the master/slave voting strategy reduces pass-through delay, though it slightly increases the fault rate, as discussed previously in Section 5.1.

#### 5.3 Effect on convergence

Next, we study the effect of our design on network-wide convergence. We do this by simulating a network of BTRs (each with eight virtual router instances) across three network-level graphs: the entire AS-level topology (labeled AS in Figure 8) sampled on Jan 20 2008, AS 3967's internal network topology as collected from Rocketfuel (labeled 3967), and cliques (labeled CQ) of varying sizes (since a clique contains the "worst case" for routing, allowing the potential to explore all $n!$ possible paths in a clique of size $n$). To determine the ordering of when BTRs respond, we run our implementation over routing updates, record pass-through times, and replay them within our simulation framework. Since for the master/slave approach there is no effect on network operation unless a bug is triggered (the slaves only operate as standbys), we focus our evaluation on the other strategies.

We found several key results. First, as shown in Figure 8, the voting schemes do not produce any significant change in convergence beyond the delay penalty described in previous sections, as compared to a network containing only standard routers. We found this delay penalty to be much smaller than propagation delays across the network, and to be reduced further when MRAI is activated. As the number of instances increases (up to the number of processor cores), continuous-majority's delay decreases, because it becomes increasingly likely that one instance will finish early. The opposite is true for wait-for-consensus, as the delay of the slowest instance becomes increasingly large. Next, while we have thus far considered a virtual-router-level deployment, where voting is performed at each router, we also considered a virtual-network deployment, where voting is performed at the edges of the network.
In our experiments we ran eight virtual networks, and found that this speeds up convergence, as routers do not have to wait for multiple instances to complete processing before forwarding updates. Hence, for small numbers of diverse instances, voting per-router has smaller convergence delay. However, virtual-network approaches require substantially more control overhead than the virtual-router voting schemes. To address this, we found that simple compression schemes [11] that eliminate redundancy across updates could remove the vast majority of this overhead. Finally, to validate our simulations, we set up small topologies on Emulab [2], injected routing events, and compared with simulations of the same topology. We found no statistically significant difference.

### 6. DISCUSSION

For simplicity, this paper discusses one particular design point. However, our architecture is amenable to deployment at varying levels of granularity:

**Server-based operation:** Instead of running the diverse instances within a single router, their computations may be offloaded to a set of dedicated servers running in the network (e.g., an RCP-like platform [15]). These servers run the router software in virtualized environments, and cross-check the results of routers running within the network. When a buggy result is detected, virtual router instances may be migrated into the network to replace the buggy instance. Alternatively, the servers may be configured to operate in read-only mode, such that they signal alarms to network operators rather than participate directly in routing.

**Network-wide deployment:** Instead of running instances of individual router software in parallel, ensembles of routers may collectively run entire virtual networks in parallel. Here, the outputs of a router are not merged into a single FIB, or into a single stream of updates sent to its neighbors.
Instead, each router maintains a separate FIB for each virtual network, and voting is used at border routers to determine which virtual network data packets should be sent on. The advantage of this approach is that it allows different routing protocols to be used within each virtual network, making it simpler to achieve diversity. For example, OSPF may be run in one network and IS-IS in another. In addition, convergence speed may be improved, as individual physical routers do not have to wait for their instances to reach a majority before sending a routing update.

**Process-level deployment:** Our design runs multiple instances of routing software in parallel, and hence incurs some memory overhead. On many Internet routers this is not an issue, due to low DRAM costs, and the fact that DRAM capacity growth has far exceeded that of routing table growth. That said, if it is still desirable to decrease memory usage, router software may be modified to vote on a shared RIB instead of a FIB. We found the RIB is by far the largest source of memory usage in both Quagga and XORP, incurring 99.3% of total memory usage. Voting on a shared RIB would reduce this overhead by eliminating the need to store separate copies of the RIB across router instances. Here, voting could be performed across multiple routing daemons (e.g., multiple BGP processes within a single instance of Cisco IOS) to construct a single shared RIB. In addition to reducing memory usage, finer-grained diversity may speed reaction (by only cloning and restarting individual processes or threads) and allow finer-grained control (during times of load, only mission-critical components may be cloned, to reduce resource usage). However, code development may become more challenging, since this approach relies on knowing which parts of code are functionally equivalent. To address this, router software could be written to a common API, to allow replication and composition of modules from different code bases while sharing state.
**Leveraging existing redundancy:** Instead of running multiple instances in parallel, a router may be able to leverage redundant executions taking place at other routers in the network. For example, networks often provision redundant network equipment to protect against physical failures; in particular, the VRRP [27] protocol allows multiple routers to act collectively as a single router. Our architecture is amenable to leveraging physical redundancy, as the multiple instances may be deployed across the redundant router instances. In addition, all routers in an ISP compute the same egress set of BGP routes that are “equal” according to the first few steps of the decision process that deal with BGP attributes [24, 15]. To leverage this redundancy, it may be possible to extend our architecture to support voting across multiple routers’ egress sets.

### 7. RELATED WORK

Software and data diversity has been widely applied in other areas of computing, including increasing server reliability [18], improving resilience to worm propagation [36], building survivable Internet services [28], making systems secure against vulnerabilities [20], building survivable overlay networks [44], building fault tolerant networked file systems [17], protecting private information [43], and recovering from memory errors [12]. Techniques have also been developed to minimize computational overhead by eliminating redundant executions and redundant memory usage across parallel instances [45, 26]. However, as discussed in Section 1.3, routing software presents new challenges for SDD (e.g., routers must react quickly to network changes, have vast configuration spaces and execution paths, and rely on distributed operations), as well as new opportunities to customize SDD (routers have small dependence on past history, can achieve the same objectives in different ways, and have well-defined interfaces). We address these challenges and opportunities in our design.
There has also been work studying router bugs and their effects [42, 32], and our design is inspired by these measurement studies. Also, [14] used a graph-theoretic treatment to study the potential benefits of diversity across physical routers (as opposed to diversity within a router). As work dealing with misconfigurations [23, 24] and traditional fail-stop failures [10, 35, 33, 27] becomes deployed, we envision router bugs will become an increasingly significant roadblock to improving network availability. Our work can be contrasted with techniques that attempt to prevent bugs by formally verifying the code. These techniques are typically limited to small codebases, and often require manual effort to create models of program behavior. For example, with manual intervention, a small operating system kernel was formally verified [29]. For routing, work has been done on languages to model protocol behavior (e.g., [25]); however, the focus of this work is on algorithmic behaviors of the protocol, as opposed to other possible places where a bug can be introduced. In contrast, our approach leverages a small and low-complexity hypervisor, which we envision being possible to formally verify. Our design leverages router virtualization to maintain multiple diverse instances. Router virtualization is an emerging trend gaining increased attention, as well as support in commercial routers. Our design builds on the high-level ideas outlined in [16] by providing a complete design, several algorithms for detecting and recovering from bugs, and an implementation and evaluation. In addition, our design is complementary to the use of models of router behavior [23, 24] and control-plane consistency checks [41, 37], as these models/checks can be run in place of one or more of the router virtual instances. Finally, systems such as MARE (Multiple Almost-Redundant Executions) [45] and the Difference Engine [26] focus on reducing overheads from replication.
MARE runs a single instruction stream most of the time, and only runs redundant instruction streams when necessary. The Difference Engine attains substantial savings in memory usage across VMs, through use of sub-page level sharing and in-core memory compression. These techniques may be used to further reduce the overheads of our design.

### 8. CONCLUSIONS

Implementation errors in routing software harm the availability, security, and correctness of network operation. In this paper, we described how to improve the resilience of networks to bugs by applying Software and Data Diversity (SDD) techniques to router design. Although these techniques have been widely used in other areas of computing, applying them to routing introduces new challenges and opportunities, which we address in our design. This paper takes an important first step towards addressing these problems by demonstrating that diverse replication is both viable and effective in building robust Internet routers. An implementation of our design shows improved robustness to router bugs with some tolerable additional delay.

### 9. REFERENCES

[1] Cisco 7200 simulator. (software to run Cisco IOS images on desktop PCs) [www.ipflow.utc.fr/index.php/Cisco_7200_Simulator]
# On Line Markets for Distributed Object Services: the MAJIC system

Lior Levy, Liad Blumrosen and Noam Nisan*

Institute of Computer Science, The Hebrew University of Jerusalem, Jerusalem, Israel.

**Abstract**

We describe a general-purpose architecture for applying economic mechanisms to resource allocation in distributed systems. Such economic mechanisms are required in settings such as the Internet, where resources belong to different owners. Our architecture is built above standard distributed-object frameworks, and provides a “market” for arbitrary distributed object resources. We first describe the abstract elements and properties of an architecture that can be applied over essentially any distributed object-based platform. We then describe the MAJIC system that we have implemented over Sun's Jini platform. A key novel aspect of our system is that it handles multiple parameters in the allocation and in the specification of utilities and costs for each distributed service. We provide both theoretical and experimental results showing the following three key properties of this system: (1) efficient resource allocation; (2) motivation for resource owners to share them with others; (3) load balancing.

## 1 Introduction

### 1.1 Motivation

The following concept may be viewed as the holy grail of "Internet Computing": every user connected to the Internet should have complete access to all resources available anywhere on the Internet. The user should be presented with the illusion of a single, centrally organized "global computer". The main challenge of computer engineering, in this context, is to design the protocols, algorithms, paradigms, and systems that achieve this illusion by using the aggregation of the physical computers, communication links, and other resources that are available on the Internet. Ideally, such systems would optimally allocate all available resources across the Internet.
There are indeed a wide variety of such resources: computational resources (such as CPU time, or file servers), information resources (databases, video), communication resources (links, QoS), services (help-desk, access to specialized algorithms), hardware (printers, cameras), and more. A dream suite of protocols and algorithms for the Internet would allow all these resources, as well as others, to be optimally and transparently allocated across the Internet. There are clearly many aspects that need to be addressed on the way to this “holy grail”, and many of these aspects have received much attention in the literature. In this paper we concentrate on the interplay and synergy between two key paradigms that address different aspects of this challenge: the distributed objects paradigm for basic technological interoperability and the economic paradigm for motivating resource sharing by different users or organizations.

### 1.2 Distributed Objects Paradigm

In recent years the paradigm of distributed object services is becoming the basic backbone of communication and cooperation between components of a distributed system. In a distributed object framework, computers on a network encapsulate their shareable resources (services) in well defined procedural interfaces. Other computers then use these resources by performing remote procedure calls (RPC), or in OOP terminology, remote method invocations (RMI) on them. In its pure object-oriented variant this paradigm is the basis of most modern commercial distributed platforms: CORBA [3], Microsoft’s DCOM [6], Java’s RMI [33]. Taking a wider perspective, web servers follow this paradigm at the level of web pages (static or dynamic), and with standards such as XML [49] and protocols such as SOAP [41], a true web-like infrastructure for distributed processing that follows this paradigm emerges.
Indeed many authors have proposed variants and implementations of this vision under names such as “Web of Objects”, “Distributed Objects Everywhere”, etc. [33, 48, 42, 13, 3].

### 1.3 Economic Paradigms for Resource Sharing

A major difficulty in achieving efficient sharing of resources across the Internet is the obvious fact that the different computers and resources belong to different organizations. An Internet-wide resource sharing system must provide motivation for the owners of resources to share them with others. Any such motivation leads to some kind of economic system, and in its simplest form involves payments for services. Such economic systems for distributed allocation of computing resources have been applied to CPU time [9, 47, 46, 45, 27, 34], communication [20, 40, 37, 19], and other resources [21, 15, 38, 45], and have received much theoretical interest lately [20, 25, 10, 32, 16, 39, 28, 29]. Such systems pursue two complementary goals: that participants are indeed motivated to share these resources with others, and that the resources are indeed allocated well.

### 1.4 This Paper

We first propose a general architecture for augmenting a distributed-object system with payments, and a market-based mechanism for allocating resources. We then describe a system of this sort, the MAJIC system, that we have implemented over Sun’s Jini infrastructure. More details are available on the MAJIC website [22]. This architecture allows, for the first time, applying the ideas of economic-based cooperation to the full spectrum of resources available on the Internet: CPU time, file servers, databases, online entertainment, communication bandwidth, algorithms, printers and other hardware, etc. Moreover, this is done in a way that is easily interoperable with the current leading technologies.
We provide both general arguments (and mathematical proofs) showing that such an augmented system would indeed function well, and specific experimental results from MAJIC supporting these findings. This architecture provides, for the first time, an economy-based general-purpose infrastructure for all kinds of resources, in contrast to earlier similar systems, which were each dedicated to one kind of resource. A key novel aspect of our architecture is that it allows multiple parameters in specifying utilities and costs for each service request, and handles these parameters in an efficient and non-trivial way. It is clear that any system that achieves serious resource sharing over the Internet must address both the issue of technical inter-component communication and the issue of motivating selfish entities to share their resources. We believe that systems built along the principles laid out here answer both issues in an integrated, interoperable, and efficient way, and could provide a general-purpose architecture that allows true efficient resource sharing on the Internet!

## 2 A blueprint for a market of distributed object services

### 2.1 Basic idea

The starting point of our architecture is simple: each object that provides a service may attach a “price-tag” to it. When another object wishes to use a particular type of service, it calls a central “service market place” that functions as the object request broker. The market provides a “reverse auction” for this service, in which all available objects (service providers) that provide a service of this type can participate. The provider that charges the lowest price wins, and gets to service the request for the agreed-upon price. A simple example that we will use throughout the paper is a printing service. Our starting point assumes that printers provide the service of printing a page via the simple remote method: printer.print(page). Each printer has a price for this service: printer.getPrice().
A computer that wants to print a page on a printer that does not belong to him gets a reference to the cheapest printer from the “printing market” and can then print on this printer: \texttt{market.getPrinter().print(page)}. Several basic issues need to be addressed before this can be made into a workable system, and we describe the major ones.

### 2.2 Parameterized Services

Looking at the previous printing example, one is immediately concerned with the differences between different printers and printing jobs. In reality many parameters distinguish one print job from another: number of pages, page size, printer’s location, printing speed, print quality, etc. Clearly any serious system that handles printing services must be aware of these differences. More generally, distributed object services receive input parameters - it is quite clear that the price requested for a service must be tightly related to these parameters. Additionally, interchangeable distributed service providers are still not totally equivalent in many of their parameters (such as their quality, virtual location or their speed). This is not a simple issue to tackle when one attempts to produce a “market” for these services. Ignoring the parameters in the market will simply make any type of efficiency in resource allocation impossible. Taking all the parameters into account in the definition of the “market” will lead to a logically separate market for each request and for each service provider, eliminating competition and thus any flexibility in allocations. The solution is clearly to work within a single market, but take parameters into account during resource assignment. In our system each service type specifies the set of parameters of the service, defining the parameter space of the service. Each service provider can supply the service in some specified subset of the parameter space.
Back to our example, a certain printer may only be able to print on “A4” or “letter” page sizes, but not on “A3” size, while another may also offer “A3” size. Similarly, a printer will usually only supply the printing service at a certain physical location or with a certain time delay (depending on its current load). Each service request may be at a single point of the parameter space (“I need A4 pages”), or may be satisfied by a whole subset of the parameter space (“A4 or letter is fine”), possibly with preferences among the different possibilities. In economic terms, each point in the parameter space of a service type is a separate product type. The products that correspond to different points in the parameter space of a single service type are partial substitutes for each other - both for service providers and for service requesters. The challenge we face (and solve below) is to organize a single market for all these products, while handling the partial substitution in an economically efficient way.

### 2.3 Sellers’ Quote and Buyers’ Utility

Our economic system is based on a common currency in which all participants can express their economic preferences. Thus we assume that each service provider - seller - has a certain internal cost for supplying the service at each point of the parameter space that he can provide. Similarly, each service requester - buyer - has a certain economic benefit from receiving the service, a benefit that may depend on the parameters. Denoting by \( P \) the parameter space of a certain service type, our economic model of participants is given by the following two functions: (1) Sellers’ cost function: \( c_S : P \rightarrow R^+ \). For a point \( p \in P \), \( c_S(p) \) specifies seller \( S \)'s internal cost for supplying the service with parameters \( p \). I.e., he would like to provide this service with these parameters if he is paid more than \( c_S(p) \), and would not agree to provide the service for a lower price.
We take the convention that if \( S \) cannot provide the service with parameters \( p \) at all then \( c_S(p) = \infty \). (2) Buyers’ utility function: \( u_B : P \rightarrow R^+ \). For a point \( p \in P \), \( u_B(p) \) specifies buyer \( B \)'s benefit from receiving the service with parameters \( p \). I.e., he would like to receive this service with these parameters if he pays less than \( u_B(p) \), and would not agree to buy the service for a higher price. We take the convention that if the service with parameters \( p \) is not acceptable at all to him, then \( u_B(p) = 0 \). In our system, each seller sends to the market a function corresponding to his cost function - called the quote function \( q_S : P \rightarrow R^+ \). For a point \( p \in P \), \( q_S(p) \) specifies his “quote” for the service with parameters \( p \) - i.e. the amount of money he demands for providing the service with these parameters. The quote function \( q_S \) is essentially a catalog specifying a price for each choice of parameters that can be supplied by \( S \). Informally speaking, the quote function \( q_S \) of a seller should correspond tightly to his cost function \( c_S \). This however cannot be guaranteed at this point as a seller will likely send to the market a quote function aimed for maximizing his profits - not aimed at any particular correspondence with his cost function. We will return to this point below. The system allows sellers to modify their quote functions occasionally, taking into account changes in status. When a buyer requests a service he sends to the market a representation of his utility function. The market matches this buyer with the seller that fits him best. The abstract process that takes place is that the market creates an agent for the buyer based on his utility function. This agent is presented with the catalogs (quote functions) of all sellers, and chooses the best seller and parameters. 
The details of this process are explained in Section 2.4. The representation of the quote and utility functions as objects may be given by specifying the formulas that define them as a function of the different parameter values, or by opaque distributed objects that encapsulate these functions. The choice of representation will usually depend on the service type and has consequences for how the system can function.

### 2.4 The Market Mechanism

As mentioned, the market holds the current quotes from all sellers \(\{q_S\}\), and when it receives a request from a buyer - specified by the buyer's utility function \(u_B\) - it attempts to match this request to the best seller and to choose the best parameter values. The optimization criterion is the surplus: \[ \text{surplus}_{B,S}(p) = u_B(p) - q_S(p) \] For a given supplier \(S\), the parameters \(p\) are chosen so as to maximize this surplus: \[ p^*_{B,S} = \arg\max_p \{\text{surplus}_{B,S}(p)\} \] When search over the parameter space is computationally feasible (this depends on the representations of the parameter space and of the quote and utility functions), the buyer's agent can directly find these optimal parameters. Otherwise, the system allows each buyer to supply a “parameter search engine” that attempts to find these parameters, given a quote function (encapsulated by an object) as input. We expect this mechanism to be computationally efficient in many cases, either because of the simplicity of the parameter space, or because of the small number of parameters. However, when searching in a complex parameter space, we expect the buyer's search engine to find a close approximation in reasonable time. The optimal supplier is chosen so as to maximize the surplus under the optimal parameters: \[ S^* = \arg\max_S \{\text{surplus}_{B,S}(p^*_{B,S})\} \] At this point the buyer's agent must check that this surplus is indeed positive: \(\text{surplus}_{B,S^*}(p^*_{B,S^*}) > 0\).
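As a concrete illustration, this matching step can be sketched over a small discrete parameter space. The printer names, prices, and utilities below are invented for the example; following the conventions of Section 2.3, an infinite quote marks a parameter point the seller cannot supply, and a zero utility marks a point the buyer will not accept.

```python
INF = float("inf")

# Hypothetical quote functions q_S(p) over a discrete parameter space
# of page sizes; INF marks parameters a seller cannot supply.
quotes = {
    "printerA": {"A4": 3.0, "letter": 2.5, "A3": INF},
    "printerB": {"A4": 2.0, "letter": INF, "A3": 6.0},
}
# Hypothetical buyer utility u_B(p); 0 marks an unacceptable parameter.
utility = {"A4": 5.0, "letter": 4.0, "A3": 0.0}

def best_match(quotes, utility):
    """Maximize surplus_{B,S}(p) = u_B(p) - q_S(p) over sellers and parameters."""
    best = (None, None, float("-inf"))
    for seller, q in quotes.items():
        for p, price in q.items():
            surplus = utility[p] - price
            if surplus > best[2]:
                best = (seller, p, surplus)
    return best

seller, p_star, surplus = best_match(quotes, utility)
print(seller, p_star, surplus)  # printerB at A4 yields the maximal surplus, 3.0
```

The agent then verifies that this maximal surplus is positive before committing to the trade.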
Otherwise, the buyer is not willing to pay as much as the optimal seller is asking, and the service request should be canceled. As analyzed theoretically in section 4, and demonstrated experimentally in section 5, this agent-based allocation produces efficient allocations in terms of reported utilities and quotes. Once the optimal \(S^*\) and \(p^*\) are found, the market may in principle fix any payment \(d\) in the range \(q_{S^*}(p^*) \leq d \leq u_B(p^*)\). Any price in this range will be acceptable both to the buyer and to the seller. The simplest choice would be to use the quote function as the price: the buyer must pay the seller the amount of \(q_{S^*}(p^*)\). This is certainly the usual choice in commerce as it corresponds to the catalog price of the chosen product. In terms of auction theory, this corresponds to a first price auction [31, 24]. We also suggest a different choice of payment, generalizing Vickrey’s second price auction [44]. The motivation for this payment rule is to motivate sellers to send the market a quote that is equal to their cost function, \(q_S = c_S\) - a property known as incentive compatibility. This is extremely important because otherwise the assignment does not optimize the allocation according to the true costs but rather according to the quotes. Indeed, the previously mentioned payment rule motivates sellers to announce a quote that is higher than their costs - thus potentially leading to a wrong choice of seller. For general background on this topic see [14, 4, 23], and for specific discussion in the context of computation resources see [28, 26, 36, 43]. The payment rule we suggest is as follows. Let \(S^2\) be the second best choice of provider: \[ S^2 = \arg\max_{S \neq S^*} \{\text{surplus}_{B,S}(p^*_{B,S})\} \] The surplus with supplier \(S^2\) is less than or equal to the surplus with supplier \(S^*\).
This payment method mandates that the buyer only gets the surplus of \(S^2\), while the optimal seller gets the difference between the two surpluses. Thus the payment to supplier \(S^*\) is given by: \[ d = u_B(p^*_{B,S^*}) - \text{surplus}_{B,S^2}(p^*_{B,S^2}) \] The main theoretical result we show, using the standard game-theoretic models of rational behavior, is that this payment method results in incentive compatibility, and thus all rational sellers indeed quote their true costs, leading to efficiency of allocations in the system.

### 2.5 Load Balancing as a By Product

As described above, this type of system ensures optimal allocation of each request to the service provider that is best for it. In cases where requests do not conflict with each other, this implies that the system obtains optimal global performance. This is clearly not the usual case! The whole point of allocation in distributed systems is handling the conflicts - different requests should normally be split between the available servers. Going back to our printing example, not everyone can gain access to the best and cheapest printer - this would likely cause a bottleneck there. Indeed, perhaps the most basic requirement of a distributed system is load balance: the load should be reasonably split between available servers. A key observation is that economic-based systems can provide this load balancing - if designed correctly. Specifically, when one considers the underlying reason why load balancing is usually desired, it seems that the reason is simply that users want their requests to be served quickly. Putting this into economic terms, users' utilities depend on the time until their request is serviced. This is rarely formalized, but may be easily formalized if service-time is a parameter in the parameter space of the service type - as it is in our system. E.g., a request may specify a firm deadline by setting the utility to be zero if the service-time is greater than the required deadline.
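A minimal sketch of such a firm deadline, with invented numbers: each supplier's promised service time grows with its queue length, the buyer's utility drops to zero past the deadline, and successive requests therefore drift to the less-loaded supplier.

```python
def promised_time(queue_len, per_job=1.0):
    # a supplier's quoted service time grows with its current load
    return (queue_len + 1) * per_job

def utility(service_time, deadline=3.0, value=10.0):
    # firm deadline: the service is worthless if it arrives too late
    return value if service_time <= deadline else 0.0

def assign(queues, price=1.0):
    # each request goes to the supplier maximizing utility minus price
    return max(queues, key=lambda s: utility(promised_time(queues[s])) - price)

queues = {"fast": 0, "slow": 0}
order = []
for _ in range(6):
    chosen = assign(queues)
    queues[chosen] += 1
    order.append(chosen)
print(order)  # once "fast" can no longer meet the deadline, requests shift to "slow"
```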
Similarly, gentler penalties for tardiness may be applied by tailoring the utility function's dependence on time. Suppliers, on the other hand, must make sure that their quotes do indeed reflect their current capabilities in terms of service-time, and must modify them when their load changes. Load balancing emerges automatically once the quotes of the different suppliers do indeed reflect their actual service-time capabilities. As a certain service supplier gets more requests assigned to it, it must raise the service-time promised in its quotes. This will automatically cause time-sensitive requests to be allocated to other service suppliers - those with lower loads. Requests that are less sensitive to service-time, and more sensitive to other parameters that are optimized by a loaded supplier, may still be assigned to it. This form of load balancing strikes a balance between optimized matching of service parameters and reduced service-time in a way that is locally perfect: exactly according to the specification of the service request. This local optimization does not necessarily imply a globally optimal balance of load: the assignment of a certain supplier to one request may result in a heavy penalty for the next request. Indeed, any formalization of optimal global allocation is computationally intractable (NP-complete) [11], and moreover, cannot be done in an online mode - servicing requests as they come [8]. Yet, we supply theoretical evidence as well as experimental results suggesting that load balance emerges.

## 3 The MAJIC system: Multi-parameter Auctions for Jini Components

The MAJIC system is built on top of Sun's Jini platform, implementing the basic architecture described above. We chose the Jini platform since it is a simple yet powerful distributed object system with open source. Moreover, Jini's object-broker mechanism - the lookup service - turned out to be easily adapted to our purposes.
In addition, Jini uses Java's code mobility capabilities, which in our case allows transfer of utility functions and quote functions encapsulated in objects. 3.1 Jini overview The Jini system is a distributed systems technology released by Sun Microsystems in 1999. The Jini technology enables all types of digital devices to work together in a community, without extensive planning or installation. It is built on top of the Java environment [17] and the RMI mechanism [35]. Detailed specifications and explanations regarding the Jini system can be found in [1, 7]; on-line documentation can be found in [18]. We now describe only the bare essentials that are directly required for our purposes. The Jini technology infrastructure provides mechanisms for devices, services, and users to join and detach spontaneously from a network, and to be visible and available to those who want to use them. Each Jini system is built around one or more lookup services. A lookup service is a service that maintains a list of known services and provides the ability to search for and find services via the lookup protocol. When a service is booted on to the network, it uses the discovery protocol to find the local lookup service. The service then registers its proxy object (a Java object) with the lookup service using the join protocol. When a client program queries the lookup service for a particular service (using the lookup protocol), the lookup service returns the appropriate service proxy (or a set of service proxies) to the client. Then, the client can invoke methods of the chosen service using the proxy. The invocation can be done either locally or remotely (using the Java RMI protocol). Figure 1 illustrates the system's normal flow. [Figure 1 - Jini protocols flow] We use two other Jini concepts inside MAJIC; the first is the leasing mechanism ([7] ch. 10), which gives Jini its self-healing nature.
This mechanism is a time-based resource reservation: if a service fails or stops (either intentionally or unintentionally) without “cleaning up” after itself, its leases will eventually expire and the service will be deleted. The second concept is the service attribute set ([7] ch. 7), a flexible way for services to annotate their proxy objects with information describing the service, thus helping clients to find their required services. 3.2 MAJIC Architecture overview The MAJIC architecture is based on the general blueprint described above, applied to the Jini platform. The main considerations were to preserve the interoperability and programming paradigms of the original Jini platform, while providing maximal flexibility. In addition, the market mechanisms should have minimal effect on the performance of the system. 3.2.1 Service Types Each service type in our architecture is implemented as a Jini service that additionally implements the Economy Service Interface (ESI). Each service type has a well-known set of parameters that defines the parameter space. The parameters are partitioned into seller parameters, those that are totally fixed by each seller, and buyer parameters, which can be chosen by each buyer. Each service type defines a Service Contract class (a subclass of the abstract SC class described below) for sellers and a Buyer Valuation class (a subclass of the abstract BV class described below) for buyers. 3.2.2 The market The market is implemented as an extension (subclass) of Jini’s lookup service that uses economic mechanisms to perform efficient allocation. When service providers (sellers) join the market, they submit their quote wrapped in a Seller Contract (SC) object, passed as one of the service attributes. Whenever a buyer performs a lookup using the market, the buyer submits its utility function and its parameter search engine, both wrapped in a Buyer Valuation (BV) object.
This is used by the market to create a buyer agent that performs the economic search for the optimal seller and parameters. Finally, the market returns a proxy to the optimal seller as the result of the lookup request. Additionally, a Final Contract object encapsulating the closed deal is created and sent to the buyer and to the seller. We have implemented two distinct markets, corresponding to the two payment methods described in the blueprint (first or second price). 3.2.3 The seller The seller is a Jini service provider (thus additionally implementing the ESI interface). When joining a market, the seller should submit its quote function encapsulated in the SC object. Note that the quote function may be implemented as an arbitrary Java method; the method’s code is actually transferred to the market. In addition to the quote function, the SC object also includes the fixed values of all seller parameters, as well as some timing-control information to be described below. Using the ESI interface, the seller receives online notifications of all Final Contracts (described below) and must be able to update and resubmit its SC object to the market. When the seller is actually invoked by a buyer using a previously obtained proxy, it must verify the validity of the corresponding FC. 3.2.4 The buyer The buyer is a simple Jini client that, in a lookup request, sends the market its BV object. The BV object encapsulates both the utility function and the parameter search engine that finds the best values for the buyer parameters (accessed through a getBuyerParameters() method). Both of these may be implemented as arbitrary Java methods whose code is transferred to the market and used as the buyer's agent. The parameter search engine may use well-known structure information regarding the parameter space of a particular service type, or may perform an exhaustive search of the parameter space.
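As an illustration of how a Buyer Valuation couples a utility function with an exhaustive parameter search engine, here is a sketch in Python (the actual MAJIC classes are Java; the class name and attributes below are our own, loosely mirroring the getBuyerParameters() idea):

```python
class BuyerValuation:
    """Illustrative BV sketch: a utility function plus a parameter search
    engine that, given a seller's quote function, returns the buyer
    parameters maximizing the surplus u_B(p) - q_S(p)."""

    def __init__(self, utility, parameter_space):
        self.utility = utility                  # u_B: parameters -> value
        self.parameter_space = parameter_space  # iterable of candidate p

    def get_buyer_parameters(self, quote):
        """Exhaustive search engine: try every candidate p and keep the
        one with the best surplus. Returns (best_p, best_surplus)."""
        best_p, best_surplus = None, float("-inf")
        for p in self.parameter_space:
            surplus = self.utility(p) - quote(p)
            if surplus > best_surplus:
                best_p, best_surplus = p, surplus
        return best_p, best_surplus
```

For structured parameter spaces a smarter search engine (e.g. one exploiting monotonicity of the quote in a parameter) can replace the exhaustive loop without changing the interface.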
3.2.5 Final contract The Final Contract (FC) is a sealed contract that represents a closed deal between a buyer and a seller; it is constructed by the market and contains the following: a unique identifier, the SC, the chosen Buyer Parameters (BP), and the payment details. The FC should, in principle, be digitally signed by the market (this is not currently implemented) and is to be used as the basis for the actual electronic fund transfers. 3.2.6 Time-control mechanisms Two mechanisms are supplied for ensuring that sellers' time-dependent quotes are used only if they are up to date. Every SC that is submitted to the market allows specifying a maximum number of contracts that may be created according to this SC. Whenever the maximum is reached, the market removes this seller from its list of service providers. Sellers with time-dependent quotes can specify a low maximum and then must periodically update their SC prior to this maximum being reached. In addition, each FC contains a lease control object (LCO) that assures sellers that their services will be invoked by buyers within a predefined time interval. When the lease expires, the service proxy can no longer be used and an exception is raised. 3.3 Participants' obligations There are some implicit contracts (commitments and obligations) between all entities (players). The most significant assumption is that the market is trusted and accepted by all sides. The market can assign buyers to a seller as long as its SC is valid. The SC is valid as long as the service is registered at the lookup service and the seller hasn't supplied the maximum number of contracts mentioned in its SC. The seller must provide the service according to the parameters published in the FC, as long as the FC is valid (i.e., the FC lease hasn't expired). The seller is required to change all necessary parameters inside its SC whenever relevant (time-dependent parameters).
The buyer can execute the service during the lease duration defined in the FC. Both seller and buyer accept the payment details described inside the FC. 4 Theoretical Analysis We describe a theoretical model that analyzes our multiparameter market system. 4.1 The Model The model is based on two types of players, buyers and sellers, and a marketplace. Denote by $P$ the parameter space of a certain service type. Each seller $S$ has: 1. A private cost function $c_S: P \rightarrow R^+$. If $S$ cannot provide the service with parameters $p$, then $c_S(p) = \infty$. 2. A public quote function $q_S: P \rightarrow R^+$. This function is sent to the market. Each buyer $B$ has two functions that are coupled together: 1. A utility function $u_B: P \rightarrow R^+$. If the service with parameters $p$ is not acceptable at all to him, then $u_B(p) = 0$. 2. A function $g_B: Q \rightarrow P$, where $Q$ is the space of all possible quote functions. This function acts as the parameter search engine that finds appropriate parameters, given a quote function. See section 2.3 for explanations. **Definition 1.** $g_B$ is called optimal if $\forall q_S$, $g_B(q_S) \in \arg \max_{p \in P} \{u_B(p) - q_S(p)\}$. I.e., $g_B$ finds the parameters that maximize the buyer surplus. 4.1.1 The assignment mechanism The market holds the current quotes from all sellers $\{q_S\}$, and when it receives a request from a buyer $B$, $(u_B, g_B)$, it attempts matching this request to the best seller and choosing the best parameter values. The optimization criterion is the surplus, $\text{surplus}_{B,S}(p) = u_B(p) - q_S(p)$. For a given supplier $S$, the parameters $p$ are chosen by the buyer's parameter search engine: $p^*_{B,S} = g_B(q_S)$. The optimal seller is chosen so as to optimize the surplus under these parameters: $S^* = \arg \max_S \{ \text{surplus}_{B,S}(p^*_{B,S}) \}$. If $\text{surplus}_{B,S^*} \leq 0$ then the request is denied.
Otherwise, the market fixes a payment $d_{B,S^*}$ according to one of the following payment methods: - **first price:** $d_{B,S^*} = q_{S^*}(p^*_{B,S^*})$ - **second price:** let $S^2 = \arg \max_{S \neq S^*} \{ \text{surplus}_{B,S} \}$; then $d_{B,S^*} = u_B(p^*_{B,S^*}) - \text{surplus}_{B,S^2}(p^*_{B,S^2})$. If $\text{surplus}_{B,S^2} \leq 0$ or $S^2$ does not exist, then $d_{B,S^*} = u_B(p^*_{B,S^*})$. Finally, the market outputs the assignment $B \leftrightarrow S^*$ and the payment $d_{B,S^*}$. We assume the following: (1) Several sellers have registered at the market and can alter their quote functions at any moment. (2) Buyers arrive to the market online and immediately receive a seller assignment. 4.2 Model Properties We show that this model has the following properties: incentive compatibility, allocation efficiency and load balancing. Our analysis uses game theoretic notions that are standard in the field of mechanism design (see [23, 30]). Proofs for all theorems can be found in the full version of the paper [22]. **Definition 2. (seller gain)** For a fixed buyer $(u_B,g_B)$, the gain of seller $S$ is $$gain_S(q_S,q_{-S}) = \begin{cases} d_{B,S} - c_S(p) & \text{if } B \text{ gets } S \text{ with } p \\ 0 & \text{otherwise} \end{cases}$$ where $q_{-S}$ is the vector of quotes of all other sellers. **Definition 3. (Dominant strategy)** A strategy (quote) $q_S$ of seller $S$ is called dominant if for every other quote $q'_S$ of $S$ and for every declaration $q_{-S}$ of all other players, $gain_S(q_S,q_{-S}) \geq gain_S(q'_S,q_{-S})$. **Definition 4. (Incentive Compatibility)** A market mechanism will be called incentive compatible if declaring the true cost function ($q_S = c_S$) is a dominant strategy for all sellers. *Note*: We do not discuss in this paper incentive compatibility for buyers, as it is known that this cannot be achieved concurrently with incentive compatibility of sellers, as known from the analysis of bilateral trade [23].
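The assignment mechanism and the two payment rules above can be sketched as follows (an illustrative Python rendering of the mechanism of section 4.1.1, not the MAJIC implementation; names are ours, and the second-price rule is the one shown):

```python
def run_market(buyer, sellers):
    """Sketch of the second-price assignment mechanism.
    `buyer` is (u_B, g_B): a utility function and a search engine mapping
    a quote function to chosen parameters. `sellers` maps a seller id to
    its quote function q_S (assumed nonempty). Returns (winner, p, payment)
    or None if the request is denied."""
    u_B, g_B = buyer
    offers = {}
    for sid, q_S in sellers.items():
        p = g_B(q_S)                         # buyer's engine picks parameters
        offers[sid] = (p, u_B(p) - q_S(p))   # surplus under those parameters
    # Rank sellers by the surplus they offer this buyer.
    ranked = sorted(offers.items(), key=lambda kv: kv[1][1], reverse=True)
    best_id, (best_p, best_surplus) = ranked[0]
    if best_surplus <= 0:
        return None                          # request denied
    if len(ranked) > 1 and ranked[1][1][1] > 0:
        # Second price: the buyer keeps only the runner-up surplus.
        payment = u_B(best_p) - ranked[1][1][1]
    else:
        payment = u_B(best_p)                # no positive-surplus runner-up
    return best_id, best_p, payment
```

Note how the seller's own quote drops out of its payment: the payment depends only on the winner's utility and the runner-up's surplus, which is the source of the incentive-compatibility result.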
**Theorem 4.1.** Assume that (1) the assignment of services does not cause changes in any $c_S$ and (2) $g_B$ is optimal for all buyers; then the second price MAJIC mechanism is incentive compatible. *Remark*. According to [29], when $g_B$ is not optimal, the VCG mechanism is not incentive compatible. Nevertheless, there are methods that achieve a feasible approximation to incentive compatibility; such methods are described in [20]. **Lemma.** Under the assumptions of theorem 4.1, $\forall u_B\ \forall q_S\ \forall q_{-S}$: $gain_S(c_S,q_{-S}) \geq gain_S(q_S,q_{-S})$. **Definition 5.** The total welfare achieved by the market is $\sum_{B \text{ assigned to } S \text{ with } p} \left( u_B(p) - c_S(p) \right)$. **Theorem 4.2.** Assume that (1) for all sellers $q_S = c_S$; (2) for each buyer $g_B$ is optimal; (3) the assignment of services does not cause changes in any $c_S$. Then, for every sequence of service requests, the total welfare is optimized by the market's allocation. We can show that the load balancing achieved by this economic-based model is good in many situations. Specifically, if all service suppliers are identical, then we would expect an almost uniform allocation of work among the suppliers. **Theorem 4.3.** Assume that (1) all the service providers are identical; (2) all buyers place a positive utility on faster service-time; (3) all quotes are correctly updated to reflect the service suppliers' true load. Then, the allocation obtained by the market achieves a makespan (last completion time of a service) that is within a factor of 2 w.r.t. the optimal allocation. It is shown in [2] that no algorithm can achieve a better competitive ratio. As usual, and as demonstrated by our experiments, typical behavior is much better and applies in a wider class of situations. 5 Experimental results We have performed several kinds of tests on the MAJIC system: the system performance overhead, the load balancing effect, and the resource allocation efficiency.
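The setting of Theorem 4.3 mirrors classic online greedy scheduling: when quotes track load and buyers prefer shorter service-time, each request effectively goes to the currently least-loaded of the identical servers. A minimal sketch of that greedy allocation (our own illustrative code, not part of MAJIC):

```python
def greedy_assign(job_durations, num_servers):
    """Online greedy list scheduling: each arriving job is placed on the
    currently least-loaded of `num_servers` identical servers -- the
    behavior that emerges when quotes reflect load. Returns the final
    per-server loads; the makespan is max(loads)."""
    loads = [0.0] * num_servers
    for d in job_durations:
        i = min(range(num_servers), key=lambda s: loads[s])
        loads[i] += d
    return loads
```

Graham's classic analysis shows this greedy rule is (2 - 1/m)-competitive on the makespan for m identical machines, matching the factor-2 bound of Theorem 4.3.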
We have created two types of services and corresponding clients: a trivial service, called Simple, and a complex one, called Printer. The simple service has a single parameter (price) and the corresponding client has a fixed utility function. The printer service has several parameters: price per page, service-time (the time at which the client request can be served), quality and speed. The service-time parameter varies with time to simulate time-dependent services. From these results, we see indeed that the MAJIC overhead is not prohibitive: in the low load case, we observed a 50% MAJIC overhead, while in the high load case we observed only a 15% overhead. The difference between the two cases can be explained by the fact that in the high load scenario most of the lookup time is spent waiting to enter the lookup service, and therefore the MAJIC overhead is less significant. 5.1 Testing environment description We have built a network-based testing environment that enables us to execute services and clients on several machines simultaneously. The entities (lookup service, services and clients) and their parameters can be externally configured. We have used a single machine (PIII-600 processor, 128MB RAM, WinNT 4.0 OS) for invoking the lookup service and the service providers, and another machine (PIII-600 processor, 2GB RAM, Linux OS) for invoking the clients. The machines were connected by a LAN (Ethernet 10Mb/s). The flow of events of each test was as follows: (1) Activating a specific type of lookup service. (2) Activating the required service providers. (3) Activating the clients with a configurable time interval between their invocations. (4) The clients initiate the lookup protocol and eventually invoke the given service. 5.2 Results 5.2.1 System performance The performance of the system has been measured by the lookup protocol response time, since it is the most significant difference between the Jini and the MAJIC systems.
This is measured at the buyer side and contains the lookup search time and the network latency. Fig 2 shows the performance results in the high load scenario (no time interval between clients). In low load scenarios the results are similar, but the relative overhead is larger (see the full version of the paper [22]). The tests have been performed using 1000 clients and a variable number of “simple” services. The main purpose of this test is to examine the MAJIC system overhead due to the market mechanism (including the parameter search engine) and the additional message (FC) that is sent to the chosen seller after every assignment. We should emphasize that the additional overhead is only for the lookup service itself, which is normally insignificant compared to invocations of services. 5.2.2 Load balancing The load balancing tests have been performed on 8 printer services with 500 clients (500ms time interval between client invocations). The services were identical in all parameters. Each seller’s service-time was constantly updated to reflect its current load. Each buyer had a 60 ms job duration and a utility function: \( u_B = 120 - 0.05 \cdot \text{service\_time} \). Fig 3 shows the assignments of clients to services by the MAJIC system (for example, service number 1 was assigned 64 clients). The average number of clients assigned to a service is 62.5; moreover, the maximal deviation from the average is 10%. Pure online algorithms geared to load balancing, which are 2-competitive, show similar results [12]. 5.2.3 Resource allocation efficiency In order to demonstrate resource allocation efficiency, we chose to perform tests on the quality attribute of a printer service. We have designed a system that contains printer services with different printing qualities. In this system, each buyer had a particular preference for a printing quality.
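The quality experiment can be mimicked with a tiny surplus-maximization sketch (illustrative Python, our own code; it uses the printer qualities and prices of the experiment and the utility $u_B = 40 + f_q \cdot quality$, ignoring all other parameters):

```python
def choose_printer(f_q, printers):
    """Pick the surplus-maximizing printer for a buyer with quality
    factor f_q, where utility is u_B = 40 + f_q * quality and each
    printer quotes a flat per-page price. A simplification of the
    experiment, not the MAJIC code."""
    def surplus(p):
        return (40 + f_q * p["quality"]) - p["price"]
    return max(printers, key=surplus)["name"]

# The three printer configurations used in the quality experiment.
PRINTERS = [
    {"name": "High",   "quality": 150, "price": 15},
    {"name": "Medium", "quality": 100, "price": 10},
    {"name": "Low",    "quality": 50,  "price": 5},
]
```

Under this simplified linear model, quality-sensitive buyers (large f_q) land on the High printer and quality-insensitive ones on the Low printer, reproducing the qualitative trend of the experiment.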
We expected the MAJIC system to assign buyers with a preference for higher quality to high quality printers and vice versa. We have used 3 printer services with the following qualities: High (quality=150, price=15), Medium (quality=100, price=10), and Low (quality=50, price=5). Note that as quality decreases, the printing price decreases as well. Each buyer has a parameter, $f_q$, which is a continuous quality factor chosen uniformly in the range $[0,1]$. This factor represents the buyer’s preference for printing quality. We activated 100 clients using the following utility function: $u_B = 40 + f_q \cdot quality$. As $f_q$ increases, the buyer utility grows faster with quality. Thus, we expect that as the quality factor gets higher, the client will tend to be assigned to a higher quality printer, although it charges a higher price. In Fig 4, we show buyer assignments on the described services with respect to their quality. As we can see, buyers with higher quality factors are assigned to higher quality services. For example, the buyer with $f_q = 0.8$ is assigned to the High quality service. In addition, we have tested the same scenario when the printer service-time had also been taken into consideration: $u_B = 100 + f_q \cdot quality - 0.25 \cdot \text{service\_time}$. As we can see in Fig 5, the service-time parameter caused a load balancing effect; when one of the services was loaded, the clients that were supposed to be assigned to this service were assigned to an adjacent service instead. We observe that even when the system manifests load balancing, the clients’ quality preferences still affect the assignments. 6 Conclusions We have introduced a blueprint for an infrastructure that performs online auctions for computer services over distributed object systems. We implemented such a system on top of Sun’s Jini system. We have presented both theoretical and initial empirical studies showing the efficiency of such systems.
Two major aspects of our architecture distinguish our work from previous related systems: (1) we provide a general-purpose architecture, as opposed to previous economically-based systems that were dedicated to a single resource; (2) our infrastructure handles a multiparameter space in a nontrivial way. The main future test that should be applied to our system is using it on a large scale for some specific service types. This way, we can examine several aspects of the MAJIC system: the system efficiency, possible implementations of parameter search engines, integration with existing security environments, etc. Acknowledgement. We thank Ron Lavi, Ahuva Mu‘alem and Zinovi Rabinovich for their helpful comments. We thank Yoav Etsion and Motti Beaton for technical assistance. References
Cpplib Internals
For GCC version 10.0.1 (pre-release) (GCC)
Neil Booth

Table of Contents
Conventions
The Lexer
  Overview
  Lexing a token
  Lexing a line
Hash Nodes
Macro Expansion Algorithm
  Internal representation of macros
  Macro expansion overview
  Scanning the replacement list for macros to expand
  Looking for a function-like macro’s opening parenthesis
  Marking tokens ineligible for future expansion
Token Spacing
Line numbering
  Just which line number anyway?
  Representation of line numbers
The Multiple-Include Optimization
File Handling
Concept Index

Conventions
cpplib has two interfaces—one is exposed internally only, and the other is for both internal and external use. The convention is that functions and types that are exposed to multiple files internally are prefixed with ‘_cpp_’, and are to be found in the file ‘internal.h’. Functions and types exposed to external clients are in ‘cpplib.h’, and prefixed with ‘cpp_’. For historical reasons this is no longer quite true, but we should strive to stick to it. We are striving to reduce the information exposed in ‘cpplib.h’ to the bare minimum necessary, and then to keep it there.
This makes clear exactly what external clients are entitled to assume, and allows us to change internals in the future without worrying whether library clients are perhaps relying on some kind of undocumented implementation-specific behavior. The Lexer Overview The lexer is contained in the file ‘lex.c’. It is a hand-coded lexer, and not implemented as a state machine. It can understand C, C++ and Objective-C source code, and has been extended to allow reasonably successful preprocessing of assembly language. The lexer does not make an initial pass to strip out trigraphs and escaped newlines, but handles them as they are encountered in a single pass of the input file. It returns preprocessing tokens individually, not a line at a time. It is mostly transparent to users of the library, since the library’s interface for obtaining the next token, cpp_get_token, takes care of lexing new tokens, handling directives, and expanding macros as necessary. However, the lexer does expose some functionality so that clients of the library can easily spell a given token, such as cpp_spell_token and cpp_token_len. These functions are useful when generating diagnostics, and for emitting the preprocessed output. Lexing a token Lexing of an individual token is handled by _cpp_lex_direct and its subroutines. In its current form the code is quite complicated, with read ahead characters and such-like, since it strives to not step back in the character stream in preparation for handling non-ASCII file encodings. The current plan is to convert any such files to UTF-8 before processing them. This complexity is therefore unnecessary and will be removed, so I’ll not discuss it further here. The job of _cpp_lex_direct is simply to lex a token. It is not responsible for issues like directive handling, returning lookahead tokens directly, multiple-include optimization, or conditional block skipping. It necessarily has a minor rôle to play in memory management of lexed lines. 
I discuss these issues in a separate section (see [Lexing a line], page 5). The lexer places the token it lexes into storage pointed to by the variable cur_token, and then increments it. This variable is important for correct diagnostic positioning. Unless a specific line and column are passed to the diagnostic routines, they will examine the line and col values of the token just before the location that cur_token points to, and use that location to report the diagnostic. The lexer does not consider whitespace to be a token in its own right. If whitespace (other than a new line) precedes a token, it sets the PREV_WHITE bit in the token’s flags. Each token has its line and col variables set to the line and column of the first character of the token. This line number is the line number in the translation unit, and can be converted to a source (file, line) pair using the line map code. The first token on a logical, i.e. unescaped, line has the flag BOL set for beginning-of-line. This flag is intended for internal use, both to distinguish a ‘#’ that begins a directive from one that doesn’t, and to generate a call-back to clients that want to be notified about the start of every non-directive line with tokens on it. Clients cannot reliably determine this for themselves: the first token might be a macro, and the tokens of a macro expansion do not have the BOL flag set. The macro expansion may even be empty, and the next token on the line certainly won’t have the BOL flag set. New lines are treated specially; exactly how the lexer handles them is context-dependent. The C standard mandates that directives are terminated by the first unescaped newline character, even if it appears in the middle of a macro expansion. Therefore, if the state variable in_directive is set, the lexer returns a CPP_EOF token, which is normally used to indicate end-of-file, to indicate end-of-directive. In a directive a CPP_EOF token never means end-of-file. 
Conveniently, if the caller was collect_args, it already handles CPP_EOF as if it were end-of-file, and reports an error about an unterminated macro argument list. The C standard also specifies that a new line in the middle of the arguments to a macro is treated as whitespace. This white space is important in case the macro argument is stringized. The state variable parsing_args is nonzero when the preprocessor is collecting the arguments to a macro call. It is set to 1 when looking for the opening parenthesis to a function-like macro, and 2 when collecting the actual arguments up to the closing parenthesis, since these two cases need to be distinguished sometimes. One such time is here: the lexer sets the PREV_WHITE flag of a token if it meets a new line when parsing_args is set to 2. It doesn’t set it if it meets a new line when parsing_args is 1, since then code like ```c #define foo() bar foo baz ``` would be output with an erroneous space before ‘baz’: ```c foo baz ``` This is a good example of the subtlety of getting token spacing correct in the preprocessor; there are plenty of tests in the testsuite for corner cases like this. The lexer is written to treat each of ‘\r’, ‘\n’, ‘\r\n’ and ‘\n\r’ as a single newline indicator. This allows it to transparently preprocess MS-DOS, Macintosh and Unix files without their needing to pass through a special filter beforehand. We also decided to treat a backslash, either ‘\’ or the trigraph ‘??/’, separated from one of the above newline indicators by non-comment whitespace only, as intending to escape the newline. It tends to be a typing mistake, and cannot reasonably be mistaken for anything else in any of the C-family grammars. Since handling it this way is not strictly conforming to the ISO standard, the library issues a warning wherever it encounters it. Handling newlines like this is made simpler by doing it in one place only. 
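The idea of recognizing ‘\r’, ‘\n’, ‘\r\n’ and ‘\n\r’ in one place can be sketched as follows (a Python illustration of the logic only; the real code is hand-written C in ‘lex.c’):

```python
def handle_newline(text, pos):
    """Given that text[pos] is '\r' or '\n', consume one logical newline,
    treating '\r', '\n', '\r\n' and '\n\r' each as a single newline
    indicator, and return the position just past it. A sketch of the
    idea; the real handle_newline works on character buffers."""
    first = text[pos]
    pos += 1
    # A following newline character of the OPPOSITE style belongs to the
    # same indicator; a repeated same-style character is a second newline.
    if pos < len(text) and text[pos] in "\r\n" and text[pos] != first:
        pos += 1
    return pos
```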
The function handle_newline takes care of all newline characters, and skip_escaped_newlines takes care of arbitrarily long sequences of escaped newlines, deferring to handle_newline to handle the newlines themselves. The most painful aspect of lexing ISO-standard C and C++ is handling trigraphs and backslash-escaped newlines. Trigraphs are processed before any interpretation of the meaning of a character is made, and unfortunately there is a trigraph representation for a backslash, so it is possible for the trigraph ‘??/’ to introduce an escaped newline. Escaped newlines are tedious because theoretically they can occur anywhere—between the ‘+’ and ‘=’ of the ‘+=’ token, within the characters of an identifier, and even between the ‘*’ and ‘/’ that terminates a comment. Moreover, you cannot be sure there is just one—there might be an arbitrarily long sequence of them. So, for example, the routine that lexes a number, parse_number, cannot assume that it can scan forwards until the first non-number character and be done with it, because this could be the ‘\’ introducing an escaped newline, or the ‘?’ introducing the trigraph sequence that represents the ‘\’ of an escaped newline. If it encounters a ‘?’ or ‘\’, it calls skip_escaped_newlines to skip over any potential escaped newlines before checking whether the number has been finished. Similarly code in the main body of _cpp_lex_direct cannot simply check for a ‘=’ after a ‘+’ character to determine whether it has a ‘+=’ token; it needs to be prepared for an escaped newline of some sort. Such cases use the function get_effective_char, which returns the first character after any intervening escaped newlines. The lexer needs to keep track of the correct column position, including counting tabs as specified by the ‘-ftabstop’ option.
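The column bookkeeping with ‘-ftabstop’ amounts to rounding up to the next tab stop; a sketch of that calculation (illustrative Python, not cpplib's actual code):

```python
def next_column(col, ch, tabstop=8):
    """Advance a 0-based column position past one character, expanding a
    tab to the next multiple of `tabstop`, as the '-ftabstop' option
    specifies. A sketch of the bookkeeping only."""
    if ch == "\t":
        return (col // tabstop + 1) * tabstop
    return col + 1
```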
This should be done even within C-style comments; they can appear in the middle of a line, and we want to report diagnostics in the correct position for text appearing after the end of the comment.

Some identifiers, such as `__VA_ARGS__` and poisoned identifiers, may be invalid and require a diagnostic. However, if they appear in a macro expansion we don’t want to complain with each use of the macro. It is therefore best to catch them during the lexing stage, in `parse_identifier`. In both cases, whether a diagnostic is needed or not is dependent upon the lexer’s state. For example, we don’t want to issue a diagnostic for re-poisoning a poisoned identifier, or for using `__VA_ARGS__` in the expansion of a variable-argument macro. Therefore `parse_identifier` makes use of state flags to determine whether a diagnostic is appropriate. Since we change state on a per-token basis, and don’t lex whole lines at a time, this is not a problem.

Another place where state flags are used to change behavior is whilst lexing header names. Normally, a ‘<’ would be lexed as a single token. After a #include directive, though, it should be lexed as a single token as far as the nearest ‘>’ character. Note that we don’t allow the terminators of header names to be escaped; the first ‘"’ or ‘>’ terminates the header name.

Interpretation of some character sequences depends upon whether we are lexing C, C++ or Objective-C, and on the revision of the standard in force. For example, ‘::’ is a single token in C++, but in C it is two separate ‘:’ tokens and almost certainly a syntax error. Such cases are handled by `_cpp_lex_direct` based upon command-line flags stored in the `cpp_options` structure.

Once a token has been lexed, it leads an independent existence. The spelling of numbers, identifiers and strings is copied to permanent storage from the original input buffer, so a token remains valid and correct even if its source buffer is freed with `_cpp_pop_buffer`.
The storage holding the spellings of such tokens remains until the client program calls `cpp_destroy`, probably at the end of the translation unit.

**Lexing a line**

When the preprocessor was changed to return pointers to tokens, one feature I wanted was some sort of guarantee regarding how long a returned pointer remains valid. This is important to the stand-alone preprocessor, the future direction of the C family front ends, and even to cpplib itself internally.

Occasionally the preprocessor wants to be able to peek ahead in the token stream. For example, after the name of a function-like macro, it wants to check the next token to see if it is an opening parenthesis. Another example is that, after reading the first few tokens of a #pragma directive and not recognizing it as a registered pragma, it wants to backtrack and allow the user-defined handler for unknown pragmas to access the full #pragma token stream. The stand-alone preprocessor wants to be able to test the current token with the previous one to see if a space needs to be inserted to preserve their separate tokenization upon re-lexing (paste avoidance), so it needs to be sure the pointer to the previous token is still valid. The recursive-descent C++ parser wants to be able to perform tentative parsing arbitrarily far ahead in the token stream, and then to be able to jump back to a prior position in that stream if necessary.

The rule I chose, which is fairly natural, is to arrange that the preprocessor lex all tokens on a line consecutively into a token buffer, which I call a token run, and when meeting an unescaped new line (newlines within comments do not count either), to start lexing back at the beginning of the run. Note that we do not lex a line of tokens at once; if we did that parse_identifier would not have state flags available to warn about invalid identifiers (see [Invalid identifiers], page 5).
In other words, accessing tokens that appeared earlier in the current line is valid, but since each logical line overwrites the tokens of the previous line, tokens from prior lines are unavailable. In particular, since a directive only occupies a single logical line, this means that directive handlers like the #pragma handler can jump around in the directive’s tokens if necessary.

Two issues remain: what about tokens that arise from macro expansions, and what happens when we have a long line that overflows the token run?

Since we promise clients that we preserve the validity of pointers that we have already returned for tokens that appeared earlier in the line, we cannot reallocate the run. Instead, on overflow it is expanded by chaining a new token run on to the end of the existing one.

The tokens forming a macro’s replacement list are collected by the #define handler, and placed in storage that is only freed by cpp_destroy. So if a macro is expanded in the line of tokens, the pointers to the tokens of its expansion that are returned will always remain valid. However, macros are a little trickier than that, since they give rise to three sources of fresh tokens: the built-in macros like __LINE__, and the ‘#’ and ‘##’ operators for stringizing and token pasting. I handled this by allocating space for these tokens from the lexer’s token run chain. This means they automatically receive the same lifetime guarantees as lexed tokens, and we don’t need to concern ourselves with freeing them.

Lexing into a line of tokens solves some of the token memory management issues, but not all. The opening parenthesis after a function-like macro name might lie on a different line, and the front ends definitely want the ability to look ahead past the end of the current line. So cpplib only moves back to the start of the token run at the end of a line if the variable keep_tokens is zero.
Line-buffering is quite natural for the preprocessor, and as a result the only time cpplib needs to increment this variable is whilst looking for the opening parenthesis to, and reading the arguments of, a function-like macro. In the near future cpplib will export an interface to increment and decrement this variable, so that clients can share full control over the lifetime of token pointers too.

The routine _cpp_lex_token handles moving to new token runs, calling _cpp_lex_direct to lex new tokens, or returning previously-lexed tokens if we stepped back in the token stream. It also checks each token for the BOL flag, which might indicate a directive that needs to be handled, or require a start-of-line call-back to be made. _cpp_lex_token also handles skipping over tokens in failed conditional blocks, and invalidates the control macro of the multiple-include optimization if a token was successfully lexed outside a directive. In other words, its callers do not need to concern themselves with such issues.

**Hash Nodes**

When cpplib encounters an “identifier”, it generates a hash code for it and stores it in the hash table. By “identifier” we mean tokens with type CPP_NAME; this includes identifiers in the usual C sense, as well as keywords, directive names, macro names and so on. For example, all of `pragma`, `int`, `foo` and `__GNUC__` are identifiers and hashed when lexed.

Each node in the hash table contains various information about the identifier it represents, for example its length and type. At any one time, each identifier falls into exactly one of three categories:

- **Macros** These have been declared to be macros, either on the command line or with `#define`. A few, such as `__TIME__`, are built-ins entered in the hash table during initialization. The hash node for a normal macro points to a structure with more information about the macro, such as whether it is function-like, how many arguments it takes, and its expansion.
Built-in macros are flagged as special, and instead contain an enum indicating which of the various built-in macros it is.

- **Assertions** Assertions are in a separate namespace to macros. To enforce this, cpp actually prepends a ‘#’ character before hashing and entering it in the hash table. An assertion’s node points to a chain of answers to that assertion.

- **Void** Everything else falls into this category—an identifier that is not currently a macro, or a macro that has since been undefined with `#undef`. When preprocessing C++, this category also includes the named operators, such as `xor`. In expressions these behave like the operators they represent, but in contexts where the spelling of a token matters they are spelt differently. This spelling distinction is relevant when they are operands of the stringizing and pasting macro operators `#` and `##`. Named operator hash nodes are flagged, both to catch the spelling distinction and to prevent them from being defined as macros.

The same identifiers share the same hash node. Since each identifier token, after lexing, contains a pointer to its hash node, this is used to provide rapid lookup of various information. For example, when parsing a `#define` statement, CPP flags each argument’s identifier hash node with the index of that argument. This makes duplicated argument checking an O(1) operation for each argument. Similarly, for each identifier in the macro’s expansion, lookup to see if it is an argument, and which argument it is, is also an O(1) operation. Further, each directive name, such as `endif`, has an associated directive enum stored in its hash node, so that directive lookup is also O(1).

**Macro Expansion Algorithm**

Macro expansion is a tricky operation, fraught with nasty corner cases and situations that render what you thought was a nifty way to optimize the preprocessor’s expansion algorithm wrong in quite subtle ways.
I strongly recommend you have a good grasp of how the C and C++ standards require macros to be expanded before diving into this section, let alone the code! If you don’t have a clear mental picture of how things like nested macro expansion, stringizing and token pasting are supposed to work, damage to your sanity can quickly result.

**Internal representation of macros**

The preprocessor stores macro expansions in tokenized form. This saves repeated lexing passes during expansion, at the cost of a small increase in memory consumption on average. The tokens are stored contiguously in memory, so a pointer to the first one and a token count is all you need to get the replacement list of a macro.

If the macro is a function-like macro the preprocessor also stores its parameters, in the form of an ordered list of pointers to the hash table entry of each parameter’s identifier. Further, in the macro’s stored expansion each occurrence of a parameter is replaced with a special token of type CPP_MACRO_ARG. Each such token holds the index of the parameter it represents in the parameter list, which allows rapid replacement of parameters with their arguments during expansion. Despite this optimization it is still necessary to store the original parameters to the macro, both for dumping with e.g. ‘-dD’, and to warn about non-trivial macro redefinitions when the parameter names have changed.

**Macro expansion overview**

The preprocessor maintains a context stack, implemented as a linked list of cpp_context structures, which together represent the macro expansion state at any one time. The struct cpp_reader member variable context points to the current top of this stack. The top normally holds the unexpanded replacement list of the innermost macro under expansion, except when cpplib is about to pre-expand an argument, in which case it holds that argument’s unexpanded tokens. When there are no macros under expansion, cpplib is in base context.
All contexts other than the base context contain a contiguous list of tokens delimited by a starting and ending token. When not in base context, cpplib obtains the next token from the list of the top context. If there are no tokens left in the list, it pops that context off the stack, and subsequent ones if necessary, until an unexhausted context is found or it returns to base context. In base context, cpplib reads tokens directly from the lexer.

If it encounters an identifier that is both a macro and enabled for expansion, cpplib prepares to push a new context for that macro on the stack by calling the routine enter_macro_context. When this routine returns, the new context will contain the unexpanded tokens of that macro’s replacement list. In the case of function-like macros, enter_macro_context also replaces any parameters in the replacement list, stored as CPP_MACRO_ARG tokens, with the appropriate macro argument. If the standard requires that the parameter be replaced with its expanded argument, the argument will have been fully macro expanded first.

**enter_macro_context** also handles special macros like **__LINE__**. Although these macros expand to a single token which cannot contain any further macros, for reasons of token spacing (see [Token Spacing], page 15) and simplicity of implementation, cpplib handles these special macros by pushing a context containing just that one token.

The final thing that **enter_macro_context** does before returning is to mark the macro disabled for expansion (except for special macros like **__TIME__**). The macro is re-enabled when its context is later popped from the context stack, as described above. This strict ordering ensures that a macro is disabled whilst its expansion is being scanned, but that it is not disabled whilst any arguments to it are being expanded.
**Scanning the replacement list for macros to expand**

The C standard states that, after any parameters have been replaced with their possibly-expanded arguments, the replacement list is scanned for nested macros. Further, any identifiers in the replacement list that are not expanded during this scan are never again eligible for expansion in the future, if the reason they were not expanded is that the macro in question was disabled.

Clearly this latter condition can only apply to tokens resulting from argument pre-expansion. Other tokens never have an opportunity to be re-tested for expansion. It is possible for identifiers that are function-like macros to not expand initially but to expand during a later scan. This occurs when the identifier is the last token of an argument (and therefore originally followed by a comma or a closing parenthesis in its macro’s argument list), and when it replaces its parameter in the macro’s replacement list, the subsequent token happens to be an opening parenthesis (itself possibly the first token of an argument).

It is important to note that when cpplib reads the last token of a given context, that context still remains on the stack. Only when looking for the next token do we pop it off the stack and drop to a lower context. This makes backing up by one token easy, but more importantly ensures that the macro corresponding to the current context is still disabled when we are considering the last token of its replacement list for expansion (or indeed expanding it).

As an example, which illustrates many of the points above, consider

```c
#define foo(x) bar x
foo(foo) (2)
```

which fully expands to ‘`bar foo (2)`’. During pre-expansion of the argument, ‘`foo`’ does not expand even though the macro is enabled, since it has no following parenthesis [pre-expansion of an argument only uses tokens from that argument; it cannot take tokens from whatever follows the macro invocation].
This still leaves the argument token ‘`foo`’ eligible for future expansion. Then, when re-scanning after argument replacement, the token ‘`foo`’ is rejected for expansion, and marked ineligible for future expansion, since the macro is now disabled. It is disabled because the replacement list ‘`bar foo`’ of the macro is still on the context stack.

If instead the algorithm looked for an opening parenthesis first and then tested whether the macro were disabled it would be subtly wrong. In the example above, the replacement list of ‘`foo`’ would be popped in the process of finding the parenthesis, re-enabling ‘`foo`’ and expanding it a second time.

**Looking for a function-like macro’s opening parenthesis**

Function-like macros only expand when immediately followed by a parenthesis. To do this cpplib needs to temporarily disable macros and read the next token. Unfortunately, because of spacing issues (see [Token Spacing], page 15), there can be fake padding tokens in-between, and if the next real token is not a parenthesis cpplib needs to be able to back up that one token as well as retain the information in any intervening padding tokens.

Backing up more than one token when macros are involved is not permitted by cpplib, because in general it might involve issues like restoring popped contexts onto the context stack, which are too hard. Instead, searching for the parenthesis is handled by a special function, funlike_invocation_p, which remembers padding information as it reads tokens. If the next real token is not an opening parenthesis, it backs up that one token, and then pushes an extra context just containing the padding information if necessary.

**Marking tokens ineligible for future expansion**

As discussed above, cpplib needs a way of marking tokens as unexpandable. Since the tokens cpplib handles are read-only once they have been lexed, it instead makes a copy of the token and adds the flag NO_EXPAND to the copy.
For efficiency and to simplify memory management by avoiding having to remember to free these tokens, they are allocated as temporary tokens from the lexer’s current token run (see [Lexing a line], page 5) using the function _cpp_temp_token. The tokens are then re-used once the current line of tokens has been read in.

This might sound unsafe. However, token runs are not re-used at the end of a line if it happens to be in the middle of a macro argument list, and cpplib only wants to back up more than one lexer token in situations where no macro expansion is involved, so the optimization is safe.

**Token Spacing**

First, consider an issue that only concerns the stand-alone preprocessor: there needs to be a guarantee that re-reading its preprocessed output results in an identical token stream. Without taking special measures, this might not be the case because of macro substitution. For example:

```c
#define PLUS +
#define EMPTY
#define f(x) =x=
+PLUS -EMPTY- PLUS+ f(=)
```

should be output as

```c
+ + - - + + = = =
```

and not as

```c
++ -- ++ ===
```

One solution would be to simply insert a space between all adjacent tokens. However, we would like to keep space insertion to a minimum, both for aesthetic reasons and because it causes problems for people who still try to abuse the preprocessor for things like Fortran source and Makefiles.

For now, just notice that when tokens are added (or removed, as shown by the EMPTY example) from the original lexed token stream, we need to check for accidental token pasting. We call this paste avoidance. Token addition and removal can only occur because of macro expansion, but accidental pasting can occur in many places: both before and after each macro replacement, each argument replacement, and additionally each token created by the ‘#’ and ‘##’ operators.

Look at how the preprocessor gets whitespace output correct normally. The cpp_token structure contains a flags byte, and one of those flags is PREV_WHITE.
This is flagged by the lexer, and indicates that the token was preceded by whitespace of some form other than a new line. The stand-alone preprocessor can use this flag to decide whether to insert a space between tokens in the output.

Now consider the result of the following macro expansion:

```c
#define add(x, y, z) x + y +z;
sum = add (1,2, 3);
```

which is output as

```c
sum = 1 + 2 +3;
```

The interesting thing here is that the tokens ‘1’ and ‘2’ are output with a preceding space, and ‘3’ is output without a preceding space, but when lexed none of these tokens had that property. Careful consideration reveals that ‘1’ gets its preceding whitespace from the space preceding ‘add’ in the macro invocation, not the replacement list. ‘2’ gets its whitespace from the space preceding the parameter ‘y’ in the macro replacement list, and ‘3’ has no preceding space because parameter ‘z’ has none in the replacement list.

Once lexed, tokens are effectively fixed and cannot be altered, since pointers to them might be held in many places, in particular by in-progress macro expansions. So instead of modifying the two tokens above, the preprocessor inserts a special token, which I call a padding token, into the token stream to indicate that spacing of the subsequent token is special. The preprocessor inserts padding tokens in front of every macro expansion and expanded macro argument. These point to a source token from which the subsequent real token should inherit its spacing. In the above example, the source tokens are ‘add’ in the macro invocation, and ‘y’ and ‘z’ in the macro replacement list, respectively.

It is quite easy to get multiple padding tokens in a row, for example if a macro’s first replacement token expands straight into another macro:

```c
#define foo bar
#define bar baz
[foo]
```

Here, ‘[foo]’ is output as ‘[baz]’, and two padding tokens are generated, with sources the ‘foo’ token between the brackets, and the ‘bar’ token from foo’s replacement list, respectively.
Clearly the first padding token is the one to use, so the output code should contain a rule that the first padding token in a sequence is the one that matters. But what happens when we leave a macro expansion? Adjusting the above example slightly:

```c
#define foo bar
#define bar EMPTY baz
#define EMPTY
[foo] EMPTY;
```

is output as

```c
[ baz] ;
```

As shown, now there should be a space before ‘baz’ and the semicolon in the output. The rules we decided above fail for ‘baz’: we generate three padding tokens, one per macro invocation, before the token ‘baz’. We would then have it take its spacing from the first of these, which carries source token ‘foo’ with no leading space.

It is vital that cpplib get spacing correct in these examples since any of these macro expansions could be stringized, where spacing matters. So, this demonstrates that not just entering macro and argument expansions, but leaving them requires special handling too. I made cpplib insert a padding token with a NULL source token when leaving macro expansions, as well as after each replaced argument in a macro’s replacement list. It also inserts appropriate padding tokens on either side of tokens created by the ‘#’ and ‘##’ operators. I expanded the rule so that, if we see a padding token with a NULL source token, and that source token has no leading space, then we behave as if we have seen no padding tokens at all. A quick check shows this rule will then get the above example correct as well.

Now a relationship with paste avoidance is apparent: we have to be careful about paste avoidance in exactly the same locations we have padding tokens, in order to get white space correct. This makes implementation of paste avoidance easy: wherever the stand-alone preprocessor is fixing up spacing because of padding tokens, and it turns out that no space is needed, it has to take the extra step to check that a space is not needed after all to avoid an accidental paste.
The function `cpp_avoid_paste` advises whether a space is required between two consecutive tokens. To avoid excessive spacing, it tries hard to only require a space if one is likely to be necessary, but for reasons of efficiency it is slightly conservative and might recommend a space where one is not strictly needed.

**Line numbering**

Just which line number anyway? There are three reasonable requirements a cpplib client might have for the line number of a token passed to it:

- The source line it was lexed on.
- The line it is output on. This can be different to the line it was lexed on if, for example, there are intervening escaped newlines or C-style comments. For example:

```c
foo /* A long
comment */ bar
baz
```

is output as

```c
foo bar
baz
```

- If the token results from a macro expansion, the line of the macro name, or possibly the line of the closing parenthesis in the case of function-like macro expansion.

The `cpp_token` structure contains `line` and `col` members. The lexer fills these in with the line and column of the first character of the token. Consequently, but maybe unexpectedly, a token from the replacement list of a macro expansion carries the location of the token within the `#define` directive, because cpplib expands a macro by returning pointers to the tokens in its replacement list.

The current implementation of cpplib assigns tokens created from built-in macros and the ‘#’ and ‘##’ operators the location of the most recently lexed token. This is because they are allocated from the lexer’s token runs, and because of the way the diagnostic routines infer the appropriate location to report.

The diagnostic routines in cpplib display the location of the most recently lexed token, unless they are passed a specific line and column to report. For diagnostics regarding tokens that arise from macro expansions, it might also be helpful for the user to see the original location in the macro definition that the token came from.
Since that is exactly the information each token carries, such an enhancement could be made relatively easily in future.

The stand-alone preprocessor faces a similar problem when determining the correct line to output the token on: the position attached to a token is fairly useless if the token came from a macro expansion. All tokens on a logical line should be output on its first physical line, so the token’s reported location is also wrong if it is part of a physical line other than the first.

To solve these issues, cpplib provides a callback that is generated whenever it lexes a preprocessing token that starts a new logical line other than a directive. It passes this token (which may be a `CPP_EOF` token indicating the end of the translation unit) to the callback routine, which can then use the line and column of this token to produce correct output.

**Representation of line numbers**

As mentioned above, cpplib stores with each token the line number that it was lexed on. In fact, this number is not the number of the line in the source file, but instead bears more resemblance to the number of the line in the translation unit.

The preprocessor maintains a monotonically increasing line count, which is incremented at every new line character (and also at the end of any buffer that does not end in a new line). Since a line number of zero is useful to indicate certain special states and conditions, this variable starts counting from one.

This variable therefore uniquely enumerates each line in the translation unit. With some simple infrastructure, it is straightforward to map from this to the original source file and line number pair, saving space whenever line number information needs to be saved. The code that implements this mapping lies in the files ‘line-map.c’ and ‘line-map.h’.

Command-line macros and assertions are implemented by pushing a buffer containing the right-hand side of an equivalent #define or #assert directive. Some built-in macros are handled similarly.
Since these are all processed before the first line of the main input file, it will typically have an assigned line closer to twenty than to one.

**The Multiple-Include Optimization**

Header files are often of the form

```c
#ifndef FOO
#define FOO
...
#endif
```

to prevent the compiler from processing them more than once. The preprocessor notices such header files, so that if the header file appears in a subsequent #include directive and FOO is defined, then it is ignored and it doesn’t preprocess or even re-open the file a second time. This is referred to as the *multiple include optimization*.

Under what circumstances is such an optimization valid? If the file were included a second time, it can only be optimized away if that inclusion would result in no tokens to return, and no relevant directives to process. Therefore the current implementation imposes requirements and makes some allowances as follows:

1. There must be no tokens outside the controlling #if-#endif pair, but whitespace and comments are permitted.

2. There must be no directives outside the controlling directive pair, but the null directive (a line containing nothing other than a single ‘#’ and possibly whitespace) is permitted.

3. The opening directive must be of the form

```c
#ifndef FOO
```

or

```c
#if !defined FOO     /* equivalently, #if !defined(FOO) */
```

4. In the second form above, the tokens forming the #if expression must have come directly from the source file—no macro expansion must have been involved. This is because macro definitions can change, and tracking whether or not a relevant change has been made is not worth the implementation cost.

5. There can be no #else or #elif directives at the outer conditional block level, because they would probably contain something of interest to a subsequent pass.

First, when pushing a new file on the buffer stack, `_stack_include_file` sets the controlling macro `mi_cmacro` to `NULL`, and sets `mi_valid` to `true`.
This indicates that the preprocessor has not yet encountered anything that would invalidate the multiple-include optimization. As described in the next few paragraphs, these two variables having these values effectively indicates top-of-file.

When about to return a token that is not part of a directive, `_cpp_lex_token` sets `mi_valid` to `false`. This enforces the constraint that tokens outside the controlling conditional block invalidate the optimization.

The `do_if`, when appropriate, and `do_ifndef` directive handlers pass the controlling macro to the function `push_conditional`. cpplib maintains a stack of nested conditional blocks, and after processing every opening conditional this function pushes an `if_stack` structure onto the stack. In this structure it records the controlling macro for the block, provided there is one and we’re at top-of-file (as described above). If an `#elif` or `#else` directive is encountered, the controlling macro for that block is cleared to `NULL`. Otherwise, it survives until the `#endif` closing the block, upon which `do_endif` sets `mi_valid` to `true` and stores the controlling macro in `mi_cmacro`.

`_cpp_handle_directive` clears `mi_valid` when processing any directive other than an opening conditional and the null directive. With this, and requiring top-of-file to record a controlling macro, and no `#else` or `#elif` for it to survive and be copied to `mi_cmacro` by `do_endif`, we have enforced the absence of directives outside the main conditional block for the optimization to be on.

Note that whilst we are inside the conditional block, `mi_valid` is likely to be reset to `false`, but this does not matter since the closing `#endif` restores it to `true` if appropriate.

Finally, since `_cpp_lex_direct` pops the file off the buffer stack at EOF without returning a token, if the `#endif` directive was not followed by any tokens, `mi_valid` is `true` and `_cpp_pop_file_buffer` remembers the controlling macro associated with the file.
Subsequent calls to `stack_include_file` result in no buffer being pushed if the controlling macro is defined, effecting the optimization.

A quick word on how we handle the `#if !defined FOO` case. `_cpp_parse_expr` and `parse_defined` take steps to see whether the three stages ‘!’, ‘defined-expression’ and ‘end-of-directive’ occur in order in a `#if` expression. If so, they return the guard macro to `do_if` in the variable `mi_ind_cmacro`, and otherwise set it to `NULL`. `enter_macro_context` sets `mi_valid` to `false`, so if a macro was expanded whilst parsing any part of the expression, then the top-of-file test in `push_conditional` fails and the optimization is turned off.

**File Handling**

Fairly obviously, the file handling code of cpplib resides in the file ‘files.c’. It takes care of the details of file searching, opening, reading and caching, for both the main source file and all the headers it recursively includes.

The basic strategy is to minimize the number of system calls. On many systems, the basic open() and fstat() system calls can be quite expensive. For every #include-d file, we need to try all the directories in the search path until we find a match. Some projects, such as glibc, pass twenty or thirty include paths on the command line, so this can rapidly become time consuming. For a header file we have not encountered before we have little choice but to do this. However, it is often the case that the same headers are repeatedly included, and in these cases we try to avoid repeating the filesystem queries whilst searching for the correct file.

For each file we try to open, we store the constructed path in a splay tree. This path first undergoes simplification by the function `_cpp_simplify_pathname`. For example, ‘/usr/include/bits/../foo.h’ is simplified to ‘/usr/include/foo.h’ before we enter it in the splay tree and try to open() the file. CPP will then find subsequent uses of ‘foo.h’, even as ‘/usr/include/foo.h’, in the splay tree and save system calls.
Further, it is likely the file contents have also been cached, saving a read() system call. We don’t bother caching the contents of header files that are re-inclusion protected, and whose re-inclusion macro is defined when we leave the header file for the first time. If the host supports it, we try to map suitably large files into memory, rather than reading them in directly. The include paths are internally stored on a null-terminated singly-linked list, starting with the "header.h" directory search chain, which then links into the <header.h> directory chain. Files included with the <foo.h> syntax start the lookup directly in the second half of this chain. However, files included with the "foo.h" syntax start at the beginning of the chain, but with one extra directory prepended. This is the directory of the current file; the one containing the #include directive. Prepending this directory on a per-file basis is handled by the function search_from. Note that a header included with a directory component, such as #include "mydir/foo.h" and opened as ‘/usr/local/include/mydir/foo.h’, will have the complete path minus the basename ‘foo.h’ as the current directory. Enough information is stored in the splay tree that CPP can immediately tell whether it can skip the header file because of the multiple include optimization, whether the file didn’t exist or couldn’t be opened for some reason, or whether the header was flagged not to be re-used, as it is with the obsolete #import directive. For the benefit of MS-DOS filesystems with an 8.3 filename limitation, CPP offers the ability to treat various include file names as aliases for the real header files with shorter names. The map from one to the other is found in a special file called ‘header.gcc’, stored in the command line (or system) include directories to which the mapping applies. This may be higher up the directory tree than the full path to the file minus the base name. 
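The chain layout described above can be sketched as a small model. The names and the flattened vectors below are our illustrative assumptions; cpplib actually keeps a single linked list in which the quote ("...") chain links into the bracket (<...>) chain:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified model of include search order: "foo.h" searches start at the
// directory of the including file (cf. search_from), then the quote chain,
// then the bracket chain; <foo.h> searches start directly at the bracket chain.
std::vector<std::string> candidate_paths(const std::string& header,
                                         bool quoted,
                                         const std::string& current_dir,
                                         const std::vector<std::string>& quote_chain,
                                         const std::vector<std::string>& bracket_chain) {
    std::vector<std::string> dirs;
    if (quoted) {
        dirs.push_back(current_dir);  // dir of the file containing the #include
        dirs.insert(dirs.end(), quote_chain.begin(), quote_chain.end());
    }
    // <...> lookup begins directly in the second half of the chain.
    dirs.insert(dirs.end(), bracket_chain.begin(), bracket_chain.end());

    std::vector<std::string> out;
    for (const auto& d : dirs) out.push_back(d + "/" + header);
    return out;                        // tried in order until one opens
}
```

In the real implementation each constructed path would be looked up in the splay tree first, so repeated inclusions never reach `open()` at all.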
# Concept Index

**A** assertions
**C** controlling macros
**E** escaped newlines
**F** files
**G** guard macros
**H** hash table, header files
**I** identifiers, interface
**L** lexer, line numbers
**M** macro expansion, macro representation (internal), macros, multiple-include optimization
**N** named operators, newlines
**P** paste avoidance
**S** spacing
**T** token run, token spacing
Resilient Computing on ROS using Adaptive Fault Tolerance

Michaël Lauer¹ *, Matthieu Amy, Jean-Charles Fabre², Matthieu Roy, William Excoffon and Miruna Stoicescu³

LAAS-CNRS, Université de Toulouse, CNRS, ¹UPS, ²INP, Toulouse, France
³EUMETSAT, Darmstadt, Germany

HAL Id: hal-01703968 (https://hal.laas.fr/hal-01703968), submitted on 16 Feb 2018. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

SUMMARY

Computer-based systems are now expected to evolve during their service life in order to cope with changes of various nature, ranging from evolution of user needs, e.g., additional features requested by users, to system configuration changes, e.g., modifications in available hardware resources. When considering resilient embedded systems that must comply with stringent dependability requirements, the challenge is even greater, as evolution must not impair dependability attributes. Maintaining dependability properties when facing changes is, indeed, the exact definition of resilient computing. In this paper, we consider the evolution of systems with respect to their dependability mechanisms, and show how such mechanisms can evolve with the system evolution, in the case of ROS, the Robot Operating System. We provide a synthesis of the concepts required for resilient computing using a component-based approach.
We particularly emphasize the process and the techniques needed in order to implement an adaptation layer for fault tolerance mechanisms. In the light of this analysis, we address the implementation of Adaptive Fault Tolerance (AFT) on ROS (Robot Operating System) in two steps: firstly, we provide an architecture to implement fault tolerance mechanisms in ROS, and secondly, we describe the actual adaptation of fault tolerance mechanisms in ROS. Beyond the implementation details given in the paper, we draw the lessons learned from this work and discuss the limits of this run-time support to implement AFT features in embedded systems.

KEY WORDS: Adaptive fault tolerance; ROS; Resilience

1. INTRODUCTION

Evolution during service life is very frequent in many systems nowadays, including dependable systems. Such an evolution leads to modifications of the system software and hardware configuration. A challenge for the dependability community is to develop systems that remain dependable when facing changes (new threats, changes in failure modes, application updates). The persistence of dependability when facing changes, which defines the resilience of the system [1], encompasses several aspects, among which evolvability is a key concept. Handling evolution involves new development processes, such as agile development methods, but also run-time supports that enable modifications at run-time. At run-time, dependability relies on fault-tolerant computing, i.e., a collection of Fault Tolerance Mechanisms (FTMs) attached to the application according to its criticality level. In this context, one of the key challenges of resilient computing is the capacity to adapt the FTMs attached to an application during its operational life. In resilient systems, faults lead to failure modes that may violate dependability properties.
The role of the safety analysis (e.g., using FTA, Fault Tree Analysis, or FMECA, Failure Modes, Effects and Criticality Analysis) is to identify the failure modes and the fault model, and then to define the safety mechanisms that prevent the violation of safety properties. Such safety mechanisms rely on basic error detection and recovery mechanisms, namely fault tolerance techniques, that are based on Fault Tolerance Design Patterns (FTDP) that can be combined together. During the operational life of the system, several situations may occur. For example, new threats may lead to revising the fault model (electromagnetic perturbations, obsolescence of hardware components, software aging, etc.). A revision of the fault model has, of course, an impact on the fault tolerance mechanisms. In other words, the validity of the fault tolerance mechanisms or the safety mechanisms depends on the representativeness of the fault model. In a certain sense, a poor identification of the fault model may lead, first, to paying for useless mechanisms in normal operation and, second, to a very low coverage of erroneous situations. This has an obvious side effect on the dependability measures (reliability, dependability). A change in the definition of the fault model often implies a change in the fault tolerance mechanisms. Beyond the fault model, there are other sources of changes. Resource changes may also impair some safety mechanisms that rely on hardware resources. A typical example is the loss of processing units, but even a simple loss of network bandwidth may invalidate some fault tolerance mechanisms from a timing viewpoint. Application changes are more and more frequent during the operational lifetime of a system. This is obvious for conventional applications (e.g., mobile phones) but it is also becoming necessary for more critical embedded systems.
Today, this is the case for long-living systems like space or avionics systems, but also in the automotive domain, not only for maintenance purposes but also for commercial reasons. The notion of versioning (updates) or the loading of additional features (upgrades) may change the assumptions on which the implementation of FT mechanisms relies. Such a change implies revisiting the FMECA spreadsheets but also the implementation of the FT mechanisms. Some FT mechanisms rely on strong assumptions about lower-level behavior, and the importance of assumption coverage [2] has been known for decades in the dependability community. Whatever the system’s evolution during its whole lifetime, the safety mechanisms must remain consistent with all assumptions and operational conditions in terms of fault model, resource availability and application characteristics. Thus, the FT mechanisms must be adapted accordingly, leading to the notion of Adaptive Fault Tolerance (AFT).

Contributions: This work provides the following three contributions:
i) we describe a concise synthesis of the concepts required by any Adaptive Fault Tolerant system; this synthesis is oriented towards deriving the required support from the underlying operating system or middleware;
ii) we propose an architectural model to implement generic and composable FT mechanisms on ROS, in a way that their integration is transparent to the application, a prerequisite to their dynamic adaptation;
iii) we analyze in detail to what extent the adaptation at run-time of FT mechanisms in ROS is feasible, and discuss the cost involved by this adaptation.

In a first part, we summarize our approach to implement Adaptive Fault Tolerance, enabling partial updates of FTMs to be carried out on-line. We take advantage of Component-Based Software Engineering technologies for implementing the adaptation of fault tolerance mechanisms.
The minimal run-time support for implementing adaptive fault tolerance must provide a one-to-one mapping of components to run-time units, segregation between components, and dynamic binding between components. In the second part, we analyze to what extent AFT can be implemented on ROS. ROS is presently used in many applications (robotics applications, automotive applications like ADAS, Advanced Driver Assistance Systems, or military applications). We show how ideal components can be mapped to ROS components and give implementation details of adaptive composable FTMs at run-time. We finally draw the lessons learned from our first experiments, which rely on a small case study, to identify the limits of ROS as a run-time support for Adaptive Fault Tolerance. We discuss the limits of the exercise and identify some promising directions for future work. In Section 2 we describe the motivations and the problem statement. We give in Section 3 our definition and understanding of resilient computing. Our Component-Based Software Engineering (CBSE) approach for adaptive fault tolerance is summarized in Section 4. A full account of this approach can be found in [3]. The mapping of this approach to ROS is described in Section 5 and in Section 6, with the latter focusing on dynamic adaptation. The lessons learned are given in Section 7 before concluding.

2. MOTIVATIONS AND PROBLEM STATEMENT

The need for Adaptive Fault Tolerance (AFT), arising from the dynamically changing fault tolerance requirements and from the inefficiency of allocating a fixed amount of resources to FTMs throughout the service life of a system, was stated in [4]. AFT is gaining more importance with the increasing concern for lowering the amount of energy consumed by cyber-physical systems and the amount of heat they generate [5]. For dependable systems that cannot be stopped for performing off-line adaptation, on-line adaptation of Fault Tolerance Mechanisms (FTMs) has attracted research efforts for some time now.
However, most of the solutions [6, 7, 8] tackle adaptation in a preprogrammed manner: all FTMs necessary during the service life of the system must be known and deployed from the beginning, and adaptation consists in choosing the appropriate execution branch or tuning some parameters, e.g., the number of replicas or the interval between state checkpoints. Nevertheless, predicting all events and threats that a system may encounter throughout its service life and making provisions for them is impossible. The use of FTMs in real operational conditions may lead to slight updates or unanticipated upgrades, e.g., compositions of FTMs that can tolerate a more complex fault model than initially expected. This explains why static system configurations with all possible FTMs and all possible combinations (FTM compositions) are not tractable. A form of differential FTM updates is proposed in this work to tackle unanticipated dependable systems evolution. In both aeronautical and automotive systems, the ability to perform remote changes for different purposes is essential: maintenance but also updates and upgrades of big embedded applications. The remote changes should be partial, as it is unrealistic to completely reload a processing unit for small updates. This idea has recently been promoted by car manufacturers like Renault and BMW, but also by Tesla Motors in the USA, which states on its website: "Model S regularly receives over-the-air software updates that add new features and functionality". Performing remote changes will also become important for economic reasons, for instance selling options a posteriori, since most of the evolution in the near future will rely on software for the same hardware configuration (sensors and actuators). Evolvability has long been a prerogative of the application business logic.
A rich body of research exists in the field of software engineering consisting of concepts, tools, methodologies and best practices for designing and developing adaptive software [8]. Consequently, our approach for Adaptive Fault Tolerance leverages advancements in this field such as Component-Based Software Engineering [9], Service Component Architecture [10], and Aspect-Oriented Programming [11]. Functional and configuration changes may have a strong impact on dependability, and fault tolerance mechanisms must be updated to remain efficient in the presence of faults. To this aim, our basic idea is the following. Fault tolerance or safety mechanisms are developed as a composition of elementary mechanisms, e.g., basic design patterns for fault tolerant computing. Using such concepts and technologies, we design FTMs as Lego-like brick-based assemblies that can be methodically modified at run-time through fine-grained changes affecting a limited number of bricks. This is the basic idea of our approach, which maximizes reuse and flexibility, contrary to the monolithic replacements of FTMs found in related work, e.g. [6, 7, 8]. However, most of the software run-time supports used in embedded systems today do not rely on dynamic CBSE concepts. AUTOSAR, for instance, relies on very static system engineering concepts and does not provide much flexibility today [12]. A new approach enabling remote updates to be carried out, including for safety mechanisms, is required. To the best of our knowledge, the componentization and dynamic configuration of fault tolerance mechanisms have not been addressed in previous works. ROS seems an appealing candidate for the dynamic composition of safety mechanisms. ROS is described as†: [...] an open-source, meta-operating system for your robot.
It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers. ROS can be viewed as a middleware running on top of a Unix-based operating system (typically Linux). ROS is used in robotics applications (e.g., Robonaut 2 from NASA within the International Space Station) but also in other industry sectors, the automotive industry for instance. This open-source middleware provides a weak component approach and means to dynamically manipulate the system configuration.

3. RESILIENT SYSTEM AND DESIGN PATTERNS

3.1. Basic principles and definitions

A resilient system architecture is similar to a conventional dependable system architecture, but exhibits additional services, such as an Adaptation Engine and a Monitoring Engine. Due to some changes in operation, an FTM may have to evolve, and its development is carried out off-line. The Adaptation Engine’s objective is to update the implementation of the FTM on-line with only the necessary and sufficient modifications to make it adequate. The Monitoring Engine checks that the running FTMs are consistent with their assumptions according to the observed system state. Any inconsistency detected must trigger an adaptation of the FTM. Monitoring issues are out of the scope of the work reported in this paper. In our framework, an application component \( C \) is attached (bound) to an FTM (possibly a composition of several FTMs) following the well-known Separation of Concerns (SoC) principle. The Adaptation Engine is thus responsible for the management of the dynamic link between \( C \) and the FTM, but also between components within a composite FTM component. It keeps track of component assemblies for both the application and the FTM.
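As a sketch of this dynamic link (the class names below are ours, not from the paper’s implementation), the connector can be a stable interface behind which the Adaptation Engine rebinds a new FTM at run-time without touching the application component:

```cpp
#include <cassert>
#include <memory>
#include <string>

// Service provided by any FTM, seen through a stable connector.
struct Ftm {
    virtual ~Ftm() = default;
    virtual std::string handle(const std::string& request) = 0;
};

struct PassThrough : Ftm {            // trivial "no protection" mechanism
    std::string handle(const std::string& r) override { return r; }
};

struct Duplex : Ftm {                 // stand-in for a PBR/LFR-style mechanism
    std::string handle(const std::string& r) override {
        return "duplex(" + r + ")";
    }
};

// Application component C: it only knows the connector, never a concrete FTM.
class Component {
    std::shared_ptr<Ftm> ftm_;        // the dynamic link the engine manages
public:
    explicit Component(std::shared_ptr<Ftm> f) : ftm_(std::move(f)) {}
    void rebind(std::shared_ptr<Ftm> f) { ftm_ = std::move(f); }  // Adaptation Engine
    std::string serve(const std::string& r) { return ftm_->handle(r); }
};
```

Because the connector stays the same, swapping `PassThrough` for `Duplex` is invisible to `Component`, which is the transparency property the architecture relies on.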
Fault Tolerance Design Patterns represent solutions to a given fault tolerant computing problem. In Figure 1 we show an extract of an FTDP classification with respect to fault models (F) and application characteristics (A). The fault model F has to be considered in the first place, distinguishing hardware and software faults here. Regarding hardware faults, patterns can deal with crash faults, and permanent and transient value faults. In a second step, we refine the selected pattern, a duplex strategy in our example in Figure 1, with application characteristics regarding determinism and state issues. Determinism of execution implies that identical input values lead to identical output results, a key point for active replication. State, if any, also involves the capability to capture the state, which is required for passive replication and checkpointing-based strategies. A Fault Tolerance Mechanism (FTM) is then an implementation of a selected pattern. This classification is obviously very incomplete, but its merit is to show how to select a given FTM. We rely on such criteria, more precisely assumptions, to illustrate adaptive fault tolerant computing. In the remainder of this paper, we will consider FTMs dealing with hardware faults only (permanent or transient). We recognize that software faults are more difficult to handle, both for detection and recovery, and mainly depend on application semantics. In our case, FTMs handling hardware faults are sufficient to perform our analysis targeting ROS as a run-time support for adaptive fault tolerant computing.

†http://wiki.ros.org/ROS/Introduction

3.2. FTM Selection Criteria

As soon as the fault model is determined, several solutions can be investigated depending on the application characteristics that have an impact on the implementation and the validity of the FTM. Depending on determinism and state issues, one implementation of the FTDP is chosen, leading to a concrete FTM. The resource aspect comes last.
Any FTM needs resources to execute. Among the several FTMs that satisfy the F and A assumptions, the selection can be based on local or system-wide criteria. An FTM can be chosen because it requires the smallest set of resources among the valid FTM candidates, or a more complex algorithm can be run to check whether more resources can be granted to an FTM in order to improve some other criteria like response time in normal operation, recovery time, etc. The fault model can obviously be much extended with more detailed types of faults, including undesirable events identified in safety analysis. The mechanisms are identified according to the fault model, but their implementation depends very much on the application characteristics. The example given here shows the implication of state and determinism in the selection of a given implementation of a duplex strategy. An extended definition of the fault model, including accidental physical faults both permanent and transient, programming faults, and application undesirable events considered in safety analysis, may lead to the composition of several FTMs. This issue is considered in this paper and illustrated in Section 5. The next Section focuses on describing the basic concepts that underlie an adaptive fault tolerant system.

4. ADAPTIVE FAULT-TOLERANCE

In this Section, we synthesize the essential concepts to address the problem of Adaptive Fault Tolerant computing. The extensive discussion is out of the scope of this paper and can be found in [3, 13, 14, 15].

4.1. Basic concepts of AFT

Three software development concepts are, in our view, essential for adaptive fault tolerance [13, 14]:

- **Separation of Concerns:** this concept is now well known; it implies a clear separation between the functional code, i.e. the application, and the non-functional code, i.e. the fault tolerance mechanisms in our case. The connection between the application code and the FTM must be clearly defined as specific connections.
This means that an FTM can be disconnected and replaced by a new one provided the connectors remain the same.
- **Componentization:** this concept means that any software component can be decomposed into smaller components. Each component exhibits interfaces (services provided) and receptacles (services required). This means that any FTM can be decomposed into smaller pieces, and conversely that an FTM is the aggregation of smaller ones. The ability to manipulate the binding between components (off-line but also on-line) is of high interest for AFT.
- **Design for adaptation:** the adaptation of software systems implies that i) the software itself has been analyzed with adaptation in mind for later evolution using componentization (although not all situations can be anticipated), and ii) software systems have been designed to simplify adaptation, including from a programming viewpoint (e.g., using object-oriented or aspect-oriented programming concepts).

Such basic concepts have been established and validated through various steps of analysis of fault tolerance design patterns and after several design and implementation loops, as discussed in [3]. The main benefit of AFT with respect to pre-programmed adaptation is that it provides means to define and update dependability mechanisms later during the lifetime of the system. Pre-programmed adaptation implies that all possible undesirable situations are defined at design time, which is difficult to anticipate regarding new threats (attacks), new failure modes (obsolescence of components), or simply adverse situations that have been ignored or forgotten during the safety analysis. In short, fine-grained adaptation of FTMs improves the maintainability of the system from a non-functional viewpoint.

4.2. Change Model

The choice of an appropriate fault tolerance mechanism (FTM) for a given application depends on the values of several parameters.
We consider three classes of parameters: i) fault tolerance requirements (F); ii) application characteristics (A); iii) available resources (R). We denote (F, A, R) as the change model. At any point in time, the FTM(s) attached to an application component must be consistent with the current values of (F, A, R). The three classes of parameters make it possible to discriminate between FTMs. Among the fault tolerance requirements F, we focus, for the time being, on the fault model that must be tolerated. Our fault model classification is based on well-known types [2], e.g., crash faults, value faults, development faults. In this work, we focus on hardware faults, but the approach is perfectly reproducible for FTMs that target development faults. The application characteristics A that we identified as having an impact on the choice of an FTM are: application statefulness, state accessibility and determinism. We consider that the FTMs are attached to a black-box application. This means there is no possibility to interfere with its internals, for tackling non-determinism, for instance, in case an FTM only works for deterministic applications. Resources R play an important part and represent the last step in the selection process. FTMs require resources such as bandwidth, CPU, and battery life/energy. In case more than one solution exists given the values of the parameters F and A, the resource criterion can invalidate some of the solutions. A cost function can be associated to each solution, based on R. Any parameter variation during the service life of the system may invalidate the initial FTM, thus requiring a transition towards a new one. Transitions may be triggered by new threats, resource loss or the introduction of a new application version that changes the initial application characteristics. A particularly interesting adaptation trigger is the fault model change.
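The (F, A, R) change model can be sketched as data. The encoding below is our illustrative assumption; the assumption values for PBR and LFR follow the discussion in the text (PBR needs state access, LFR needs determinism, both need two processing units):

```cpp
#include <cassert>
#include <string>

// Current system state along the three classes of parameters.
struct Change {
    bool transient_faults;  // F: must transient faults be tolerated?
    bool deterministic;     // A: is the application deterministic?
    bool state_access;      // A: can the application state be captured?
    int  cpus;              // R: processing units available
};

// Assumptions one FTM relies on (a row of Table I, roughly).
struct FtmProfile {
    std::string name;
    bool tolerates_transient;
    bool needs_determinism;
    bool needs_state_access;
    int  min_cpus;
};

// An FTM stays valid only while every assumption holds under (F, A, R);
// any parameter variation that breaks this triggers a transition.
bool valid(const FtmProfile& m, const Change& c) {
    if (c.transient_faults && !m.tolerates_transient) return false;
    if (m.needs_determinism && !c.deterministic) return false;
    if (m.needs_state_access && !c.state_access) return false;
    return c.cpus >= m.min_cpus;
}
```

With such predicates, losing state access invalidates PBR while leaving LFR valid, which is exactly the PBR→LFR transition trigger discussed below.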
Incomplete or misunderstood initial fault tolerance requirements, or environmental threats such as electromagnetic interference or hardware aging, may change the initial model to a more complex one.

4.3. FT Design Patterns and Assumptions

To illustrate our approach, we consider some fault tolerance design patterns and briefly discuss their underlying assumptions and resource needs (a full coverage of this point can be found in [15]). Any change that invalidates an assumption or implies an unacceptable resource change calls for an update of the FTMs. Duplex protocols tolerate crash faults using passive (e.g., Primary-Backup Replication, denoted PBR) or active replication strategies (e.g., Leader-Follower Replication, denoted LFR). In this case, each replica is considered as a self-checking component; the error detection coverage is assumed to be perfect. The fault model includes hardware faults or random operating system faults (no common mode faults). At least 2 independent processing units are necessary to run this FTM. Two design patterns tolerating transient value faults are briefly discussed here. Time Redundancy (TR) tolerates transient physical faults or random run-time support faults using repetition of the computation and voting. This is a way to improve the self-checking nature of a replica, but it introduces a timing overhead. Assertion & Duplex (A&D) tolerates both transient and permanent faults. It is a combination of a duplex strategy with the verification, using assertions, of safety properties that could be violated by a value fault or by a random run-time support error. Such assertions can be user-defined and used to parameterize the FTM. In a certain sense it is a hybrid mechanism, since its overall behavior is customized by application-dependent assertions. Other mechanisms fall into this category, like Recovery Blocks and N-Version Programming.
Adjudicators and multiple versions are examples of user-defined software blocks used in these generic fault tolerance design patterns. In the work reported in this paper, we use simple implementations of a subset of FTMs (see Table I). More complex implementations have been proposed in other works, as described in [16]. <table> <thead> <tr> <th colspan="2">Assumptions \ FTM</th> <th>PBR</th> <th>LFR</th> <th>TR</th> <th>A&amp;D</th> </tr> </thead> <tbody> <tr> <td rowspan="2">Fault Model (F)</td> <td>Crash</td> <td>√</td> <td>√</td> <td></td> <td>√</td> </tr> <tr> <td>Transient</td> <td></td> <td></td> <td>√</td> <td>√</td> </tr> <tr> <td rowspan="2">Application Characteristics (A)</td> <td>Deterministic</td> <td></td> <td>√</td> <td>√</td> <td>(√)</td> </tr> <tr> <td>State Access</td> <td>√</td> <td></td> <td>(√)</td> <td></td> </tr> <tr> <td rowspan="2">Resources (R)</td> <td>Bandwidth</td> <td>high</td> <td>low</td> <td>nil</td> <td>(TBD)</td> </tr> <tr> <td># CPU</td> <td>2</td> <td>2</td> <td>1</td> <td>2</td> </tr> </tbody> </table> Table I. Assumptions and fault tolerance design patterns characteristics The underlying characteristics of the considered FTMs, in terms of (F, A, R), are shown in Table I. For instance, PBR and LFR tolerate the same fault model, but have different A assumptions and R needs. PBR allows non-determinism of application execution because only the Primary computes client requests, while LFR only works for deterministic applications as both replicas compute all requests. LFR could tackle non-determinism if the application was not considered a black box, as it is in our approach. PBR requires state access for checkpoints and higher network bandwidth (in general), while LFR does not require state access but generally incurs higher CPU costs (and, consequently, higher energy consumption) as both replicas perform all computations. During the service life of the system, the values of the parameters enumerated in Figure 1 can change. An application can become non-deterministic because a new version is installed.
The fault model can become more complex, e.g., from crash-only it can become crash and value fault due to hardware aging or physical perturbations. Available resources can also vary, e.g., a bandwidth drop or constraints on energy consumption. For instance, the PBR→LFR transition is triggered by a change in application characteristics (e.g., inability to access application state) or in resources (bandwidth drop), while the PBR→A&D transition is triggered by a change in the considered fault model (e.g., safety property verification). Transitions can occur in both directions, according to parameter variation. The priority is the fault model; the selection of the solution (i.e., the composition of several FTMs) depends on the application characteristics and the available resources. The final objective is always to comply with the dependability properties during the service lifetime.

4.4. Design for adaptation of FTM

Our design for adaptation aims at producing reusable elementary components that can be combined to implement a given fault tolerance or safety mechanism. Any FTM follows the generic Before-Proceed-After meta-model. Many FTMs can be mapped and combined using this model, as shown in Table II. <table> <thead> <tr> <th>FTM</th> <th>Before</th> <th>Proceed</th> <th>After</th> </tr> </thead> <tbody> <tr> <td>PBR</td> <td>primary</td> <td>Compute</td> <td>Checkpointing</td> </tr> <tr> <td></td> <td>backup</td> <td></td> <td>State update</td> </tr> <tr> <td>LFR</td> <td>leader</td> <td>Forward request</td> <td>Compute</td> </tr> <tr> <td></td> <td>follower</td> <td>Handle request</td> <td>Compute</td> </tr> <tr> <td>TR</td> <td>Save/Restore state</td> <td>Compute</td> <td>Compare</td> </tr> <tr> <td>A&amp;D</td> <td>Before</td> <td>Compute</td> <td>Assert</td> </tr> </tbody> </table> Table II. Generic execution scheme for FT design patterns Composition implies nesting the Before-Proceed-After meta-model.
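The nesting principle can be made concrete with a minimal sketch: an FTM is a chain of three phases around a service call, and composition replaces the Proceed phase of an outer FTM with a whole inner FTM. This is illustrative Python, not the paper's ROS implementation; function names are assumptions.

```python
# Minimal sketch of the Before-Proceed-After meta-model (Table II) and of
# composition by nesting: the Proceed phase of one FTM is replaced by a
# whole inner FTM.

def make_ftm(before, proceed, after):
    """Chain an FTM's three execution phases around a service call."""
    def handle(request):
        return after(proceed(before(request)))
    return handle

def time_redundancy(proceed):
    """TR as an FTM: repeat the computation and compare the results."""
    def handle(request):
        r1, r2 = proceed(request), proceed(request)   # time redundancy
        if r1 != r2:
            raise RuntimeError("transient fault detected")
        return r1
    return handle

server = lambda req: req * 2        # the actual service
identity = lambda x: x              # placeholder Before/After phases

# An outer FTM shell whose Proceed phase is the nested TR mechanism:
pbr_tr = make_ftm(before=identity, proceed=time_redundancy(server), after=identity)
print(pbr_tr(21))   # 42
```

The substitution point (Proceed replaced by a full mechanism) is exactly the composition rule used later for PBR+TR.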
This approach improves flexibility, reusability and composability, and reduces development time. Updates are minimized since only a few components have to be changed.

4.5. Run-time support

The software run-time support must provide key features to manipulate the component graph. Any application or FTM is perceived as a graph of components. From previous experiments reported in [13], the following primitives are required:
- Dynamic creation, deletion of components;
- Suspension, activation of components;
- Control over interactions between components for the creation and the removal of connections (bindings).
Our first implementation was done on a reflective component-based middle-ware, FRASCATI [17], which features a scripting language to manipulate the component graph, FScript [18]. In the following section, we describe how fault tolerance mechanisms can be implemented in ROS [19] in a way that is transparent to applications. Then, in Section 6, we implement the above-described concepts in ROS.

5. ADDING FAULT-TOLERANCE TO ROS

Put concisely, ROS has not been designed to run safety-critical systems, despite the fact that robots may be safety critical. Rather, the main goal of ROS is to allow the design of modular applications: a ROS application is a collection of programs, called nodes, interacting only through message passing. Developing an application in ROS involves describing an assembly of nodes, a process that is in line with the component-based architecture we described in the previous section. Such an assembly is referred to as the computation graph of the application.

5.1. Component model and reconfiguration

Two communication models are available in ROS: a publisher/subscriber model and a client/server one. The publisher/subscriber model defines one-way, many-to-many, and asynchronous communications through the concept of topics. When a node publishes a message on a topic, it is delivered to every node that has subscribed to this topic.
A publisher is not aware of the list of subscribers to its topics and does not know other publishers. The client/server model defines bidirectional transactions (one request/one reply) implemented as synchronous communications through the concept of services. A node providing a service is not aware of the client nodes that may use its service. These high-level communication models ease the addition, substitution, or deletion of nodes in a transparent manner, be it offline or online. To provide this level of abstraction, each ROS application has to include a special node called the ROS Master. It provides registration and lookup services to the other nodes. All nodes register services and topics with the ROS Master. The master is the sole node that has a comprehensive view of the computation graph. When another node issues a service call, it queries the master for the address of the node providing the service, and then sends its request to this address. In order to add fault tolerance mechanisms to an existing ROS application in the most transparent manner, we need to implement interceptors. An interceptor provides a means to insert a functionality, such as a monitoring node, in the invocation path between two ROS nodes. To this end, a relevant ROS feature is its remapping capability. At launch time, it is possible to reconfigure the name of any service or topic used by a node. Thus, requests and replies between nodes can be rerouted to interceptor nodes.

5.2. Implementing a componentized FT design pattern

In this section, we first present the generic computation graph we use for FTMs on ROS; we then develop the full ROS implementation of a duplex FT design pattern, Primary-Backup Replication (PBR), combined with a Time-Redundancy (TR) design pattern.

5.2.1. Generic Computation Graph

We have identified a generic pattern for the computation graph of an FTM. Figure 2 shows its application in the context of ROS.
Node Client uses a service provided by Server. The FTM computation graph is inserted between the two thanks to the ROS remapping feature. Since Client and Server must be re-launched for the remapping to take effect, the insertion is done offline. ![Figure 2. Generic computation graph for FTM](image) The FTM nodes, topics, and services are generic for every FTM discussed in section II. Implementing an FTM consists of specializing the Before, Proceed, and After nodes with the behavior adequate to the FTM.

5.2.2. Implementing PBR

We illustrate the approach through a Primary-Backup Replication (PBR) mechanism added to a Client/Server application in order to tolerate a crash fault of the Server. Figure 3 presents the associated architecture. Three machines are involved: the CLIENT site, which hosts the Client node and the ROS Master, the MASTER site hosting the primary replica, and the SLAVE site hosting the backup replica. For the sake of clarity, the symmetric topics and services between MASTER and SLAVE are not represented. Elements of the SLAVE are suffixed with "S". We present the behavior of each node, and the topics and services used, through a request/reply exchange between a node Client and node Server (see Figure 3):
- Client sends a request to Proxy (service clt2pxy);
- Proxy adds an identifier to the request and transfers it to Protocol (topics pxy2pro);
- Protocol checks whether it is a duplicate request: if so, it sends directly the stored reply to Proxy (topics pro2pxy).
Otherwise, it sends the request to Before (service pro2bfr);
- Before transfers the request for processing to Proceed (topics bfr2prd); no action is associated in the PBR case, but for other duplex protocols, Before may synchronize with the other replicas;
- Proceed calls the actual service provided by Server (service prd2srv) and forwards the result to After (topics prd2aft);
- After gets the last result from Proceed, captures Server state by calling the state management service provided by the Server (service aft2srv), and builds a checkpoint based on this information which it sends to node After_S of the other replica (topics aft2aft_S);
- Protocol gets the result (topics aft2pro) and sends it to Proxy (topics pro2pxy);
- On the backup replica, After_S transfers the last result to its protocol node Proto_S (topics aft2pr_S) and sets the state of its server to match the primary.
In parallel with request processing, the crash detector node on the MASTER (noted CD) periodically gives a proof of life to the crash detector (CD_S) on the SLAVE to assert its liveness (topics CD2CD_S). If a crash is detected, then the crash detector of the slave notifies the recovery node (topics CD_S2rcy). This node has two purposes: (i) in order to enforce the fail-silent assumption, it must ensure that every node of the MASTER is removed; (ii) it switches the binding between the Client Proxy and the MASTER Protocol to the SLAVE Protocol. Thus, the SLAVE will receive the Client’s requests and will act as the Primary, changing its operating mode. **Figure 3. Computation graph of a PBR mechanism** ROS does not provide a command to change bindings between nodes after their initialization. The node developer must implement the transition logic. The SLAVE Protocol spins waiting for a notification from Recovery (topic rcy2pro_S). This is done using the ROS API: background threads, within a node, check for messages independently of the node's main functionality.
Upon reception of this topic, the SLAVE Protocol subscribes to topic pxy2pro and publishes to topic pro2pxy. After this transition, the proxy forwards the Client's requests to the SLAVE Protocol.

5.2.3. Impact on existing application

From the designer's viewpoint, there are two changes required to integrate an FTM computation graph into an application. First, Client has to be remapped offline to call the proxy node's service instead of the Server directly. Second, state management services, to get and set the state of the node, must be integrated into the Server. From an object-oriented viewpoint, any server inherits from an abstract class stateManager providing two virtual methods, getState and setState, overridden during server development.

5.3. Composition of FT mechanisms

The generic computation graph for FTM is designed for composability. In this section, the composition scenario is two-fold. We first illustrate the composition of two FTMs, PBR for crash faults and TR for transient value faults. Initially the application was installed with PBR. From an operational standpoint, at a given point in time, transient faults impacting numerical calculations appeared due to hardware component aging or a sudden increase of environmental radiation. In a second step, later on, we consider that the communication channel between client and server can be the target of intrusions. Cryptographic protocols, based for instance on a simple Public Key Infrastructure (PKI), can be used to cipher communications and add cryptographic signatures. With respect to request processing, a Protocol node and a Proceed node present the same interfaces: a request as input, a reply as output. Hence, a way to compose mechanisms is to substitute the Proceed node of a mechanism by a Protocol and its associated Before/Proceed/After nodes, as shown in Figure 4. Our approach enables developing a new mechanism on the foundation of several existing ones.
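The state-management contract described in section 5.2.3 (an abstract stateManager class with getState/setState overridden by every server, so that a duplex FTM can checkpoint the primary and restore the backup) can be sketched as follows. This is an illustrative Python stand-in for the paper's C++ abstract class; the class and method names are assumptions.

```python
# Sketch of the stateManager contract: any server exposes get/set state so
# the After node can build a checkpoint and apply it to the backup replica.
from abc import ABC, abstractmethod

class StateManager(ABC):
    @abstractmethod
    def get_state(self):
        """Capture the server state for checkpointing."""

    @abstractmethod
    def set_state(self, state):
        """Restore the server state from a checkpoint."""

class CounterServer(StateManager):
    """A toy stateful server overriding the state-management methods."""
    def __init__(self):
        self.count = 0
    def compute(self, n):
        self.count += n
        return self.count
    def get_state(self):
        return {"count": self.count}
    def set_state(self, state):
        self.count = state["count"]

primary, backup = CounterServer(), CounterServer()
primary.compute(5)
backup.set_state(primary.get_state())   # the checkpoint sent by the After node
print(backup.count)                      # 5: the backup matches the primary
```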
This reduces development time and increases assurance in the overall system, since all mechanisms have been validated off-line by test and fault injection techniques. ![Figure 4. Principle of composition for FT mechanisms](image)

5.3.1. Composition of PBR and TR

The composition of PBR with TR can be triggered by a change in the fault model F. Let's suppose that, at a given point in time during the system lifetime, transient faults need to be tolerated because of hardware aging or due to some changes in the run-time environment, like electromagnetic perturbations. The architecture of the composite FTM made of PBR and TR is given in Figure 5. This figure is an extension of Figure 3 where the Proceed node of the PBR has been replaced with the Protocol node of the TR implementation.

5.3.2. Composing FTMs with Cryptographic protocols

Suppose now that some passive attacks are considered in the fault model F, thus requiring the inclusion of some ciphering mechanisms, in addition to the crash and transient fault tolerance mechanisms. The generic computation graph presented in Figure 2 enables cryptographic protocols to be seamlessly added to an application already equipped with accidental fault tolerance mechanisms, PBR and TR in our example. The cryptographic mechanism (called SEC for security) is located at both the client (SEC_C) and the server side (SEC_S) as shown in Figure 6. On the server side, SEC operates before PBR and TR. In this example, we only deal with possible intrusions between the client and the server. We assume that a node implements the Certification Authority (CA). Three topics are used to communicate with the CA, namely Cli2CA for the Client, Master2CA for the Master and Slave2CA for the Slave. The topic Cli2CA enables the Before node of the Client to collect the certificate of the Server. Similarly, the topics Master2CA and Slave2CA enable the Before node of the Master, respectively the Slave, to collect the certificate of the Client.
We assume that all parties know the CA’s public key. We assume that, for each participant, Client or Server, the Before and After nodes of the SEC mechanism share the pair of private and public keys they received when initialized.
- Before of the Client ciphers the request with $K^{S}_{pub}$, the Server’s public key, and adds a signature using $K^{C}_{priv}$, the Client’s private key;
- Using the generic scheme given in Figure 6, a message is sent by the client to the server side through a new topic (called Client2Server) connecting Before of SEC_C to Protocol of SEC_S;
- Before of the Master deciphers the request with $K^{S}_{priv}$, the Server’s private key, and checks the signature using $K^{C}_{pub}$, the Client’s public key;
- The Server can then proceed with a valid deciphered request through PBR and TR.
Conversely, After of the Master ciphers the reply and computes a signature. After of the Client deciphers the reply, checks the signature, and finally delivers the reply to the Client. The communication between Master and Slave can also be secured using a similar security protocol.

6. DYNAMIC ADAPTATION: TO WHAT EXTENT WITH ROS

6.1. FTM Adaptation principles and ROS

Dynamic adaptation requires remote loading and removal of individual elements of a software component architecture, and dynamic binding facilities to reorganize a graph of components. It also requires control features to suspend and activate individual components. To what extent does ROS provide such features to safely adapt an FTM at runtime? We have considered three types of adaptations: i) updating the current FTM, for instance updating the inter-replica synchronization protocol, ii) switching from one FTM to another because some dramatic change occurred in the fault model, or because an application update leads to new application characteristics, and iii) composing two FTMs, for instance because the fault model has been extended to consider other types of faults.
We recall that the design, development, and validation of a new FTM configuration is performed off-line. The first type of adaptation implies a revision of the design or the implementation of the FTM. The other two are used to comply with parameter evolution (F, A or R). In all cases, the same features are required. Some are provided by ROS or by the underlying OS and some have been developed in-house. A minimal set of APIs required to guarantee the consistency of the transition between two different FTMs was established in previous work [13]:
- Control over components' life cycle at runtime (add, remove, start, stop).
- Control over interactions between components at runtime, for creating or removing bindings.
Furthermore, ensuring consistency before, during and after reconfiguration requires that no requests or replies are lost:
- Components have to be stopped in a quiescent state, i.e. when all internal processing has completed.
- Incoming requests on stopped components must be buffered.
ROS provides means to add and remove nodes, to buffer messages, and to control bindings when a node is launched (using the ROS remapping capability as presented in section 5). There is no ROS command to start or stop a node. Nor does ROS provide an API to control the bindings of a node at runtime. However, the good news is that these APIs can be emulated with dedicated logic added to some nodes. For instance, this is what we use to control the bindings in the Primary-Backup Replication to switch to the Backup when the primary fails. To analyze to what extent run-time adaptation is possible with ROS, we need to describe in more detail how topics work. Topics are the central concept in the publish/subscribe communication model used in ROS. A Topic is defined by:
- A name: ports are connected through a named Topic.
- Sending ports: used by publishers to send messages.
- Receiving ports: used by subscribers to receive messages.
- A data type: a unique data type is assigned to a topic for messages.
In ROS, when a node wants to publish or subscribe to a topic, it uses methods provided by the NodeHandle. The NodeHandle is an object instantiated in each ROS node and serves as the main interface to interact with the ROS master from within a node. The NodeHandle manages the start and the shutdown of the node. It also manages the instantiation of the sending and receiving ports. Creating a publisher or a subscriber is done in the following manner: - The NodeHandle instantiation: ``` ros::NodeHandle nh; ``` - **Publisher instantiation:** ``` ros::Publisher pub = nh.advertise<Data_type>("topic_name", queue_size); ``` - **Subscriber instantiation:** ``` ros::Subscriber sub = nh.subscribe("topic_name", queue_size, callback_function); ``` Publishers and Subscribers are ROS objects. The callback function is triggered by the reception of a message and receives a message of the Topic's data type as an argument. ROS allows the names of the Topics a node uses to be remapped, by substituting the Topic names hard-coded in the node with new names provided as parameters of the command launching the node. Therefore, when a new node is launched, we are able to reconfigure the Topics of this node to communicate through any Topic matching the data type of its initial topic. Remapping arguments can be passed to any node and use the syntax `topic_name:=newname`. For example, a `Protocol` node which subscribed to a Topic named "pxy2pro" can be remapped at initialization to subscribe to an existing Topic "bfr2prd" by using two methods: - either using an XML script: ``` <node pkg="package name" type="node type" name="node name"> <remap from="initial topic name" to="final topic name"/> </node> ``` - or, with the user command line: ``` rosrun package node initialTopicName:=finalTopicName ``` With these ROS features we can launch nodes and link them to the component graph, so we can adapt or compose FTMs.
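The effect of remapping can be sketched with a few lines: names hard-coded in a node are substituted at launch time according to the `old:=new` arguments, and unremapped names are left unchanged. This is an illustrative Python model, not the resolution logic of the ROS client library.

```python
# Toy sketch of ROS-style name remapping: `old:=new` launch arguments build a
# substitution table applied to the topic names a node uses.
def parse_remaps(argv):
    """Collect `old:=new` remapping arguments into a substitution table."""
    remaps = {}
    for arg in argv:
        if ":=" in arg:
            old, new = arg.split(":=", 1)
            remaps[old] = new
    return remaps

def resolve(name, remaps):
    """Resolve a hard-coded topic name through the remapping table."""
    return remaps.get(name, name)

remaps = parse_remaps(["pxy2pro:=bfr2prd"])
print(resolve("pxy2pro", remaps))   # bfr2prd: the node now talks on bfr2prd
print(resolve("pro2pxy", remaps))   # pro2pxy: unremapped names are unchanged
```

This is exactly how the `Protocol` node of the example above ends up subscribing to "bfr2prd" instead of "pxy2pro".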
We illustrate adaptation through a composition example. Our FTM architecture is designed for composability. With respect to request processing, a `Protocol` node and a `Proceed` node present the same interfaces: a request as input, a reply as output. Hence, a way to compose mechanisms is to substitute the `Proceed` node of a mechanism by a `Protocol` and its associated `Before/Proceed/After` nodes, as shown in Fig. 2 and Fig. 6. Since ROS does not provide services to manipulate a component graph at runtime, we have developed an `Adaptation Engine` node. Its purpose is to run a script controlling the adaptation of an FTM. For instance, the composition of a PBR with a TR mechanism goes through the following steps:
- The Primary `Protocol` is suspended using the Unix signal SIGSTOP;
- The `Proceed` node is killed using a ROS command: ``` rosnode kill Primary/Proceed ```
- The TR nodes (`Protocol-B-P-A`) are launched (on each replica) using a script in XML and a ROS command: `roslaunch TR TR.launch`
- The TR `Protocol` links itself to the PBR `Before` topic and the PBR `After` one using the Topic name parameters provided in the `TR.launch` script;
- The Primary `Protocol` is restarted using the Unix signal SIGCONT.
Note that ROS ensures that messages are not lost during adaptation. A publisher node buffers all outgoing messages until all its subscriber nodes read them. Thus stopping a node is safe with respect to communication. The other types of adaptation are based on a similar sequence of steps: suspend, substitute, link, and restart. For an update, only one node may be replaced. For a transition between two mechanisms, only the `Before` and `After` nodes need to be changed. With the above-described ROS/Unix features, we are able to compose or adapt our FTMs. However, we cannot dynamically adapt the communication between two nodes at run-time. The following section describes how we overcome this limitation. ### 6.2.
Implementing Dynamic Binding on ROS Dynamic binding is the ability to configure on-line the communications between two nodes. It is an important feature for AFT: to manipulate the graph of components, we need to manage not only the nodes but also the communications between them. Dynamic binding is crucial to the proposed architecture of FTMs. A good example of dynamic binding usage is the transfer of the connection linking the Client to the Primary over to the Backup when the Primary crashes (see section 5.2.2 – Implementing PBR). We cannot kill and relaunch the Backup nodes (their internal state would be lost), therefore we cannot use remapping at initialization. The Topic on which the Client publishes still exists after the crash of the Primary. We need to instantiate the communication ports of the Backup to communicate on this existing Topic. We added a function to the node to control the instantiation of the communication ports. New data types for topics cannot be defined at run-time; however, new topics based on pre-defined data types can be instantiated. For example, we have implemented a service (simplified in Fig. 7) to activate or deactivate the communication between the Backup and the Client at runtime. ```cpp bool recover(Request &req, Response &res) { // deactivation of the ports if (req.activation == 0) { pub.shutdown(); sub.shutdown(); } // instantiation of the ports to reconnect the node else if (req.activation == 1) { pub = nh.advertise<Data_type>(req.topic_pub, req.queue_size); sub = nh.subscribe(req.topic_sub, req.queue_size, callback); } return true; } ``` Figure 7. Example of a dynamic binding service This service is triggered by an external node, in this example the Recovery Node. The input request must contain multiple parameters such as Topic name, publish or subscribe, activation or deactivation.
We choose to use a service (synchronous message) to have an acknowledgment of the correct service execution. When the crash of the Primary is detected, the Recovery Node calls the service implemented in the Backup, and thus the connection to the Client is dynamically established, without using remapping at initialization. In Fig. 7 the function recover reinitializes the publisher or the subscriber to manage the dynamic binding with an external node. The function has two objectives: i) to shut down a port (using the ROS API calls pub.shutdown() or sub.shutdown()) and ii) to initialize the port (using the ROS API calls advertise or subscribe presented in 6.1). In any case, an external node is mandatory to trigger the function and to pass to the node the various parameters it needs. In our example, we chose to use an existing Topic to bind the Client and the Backup. With this approach it is also possible to create a totally new Topic between them. In summary, our dynamic binding approach enables handling two situations:
1. Activation/shutdown of a Topic in an existing node (switch Primary/Backup)
2. The insertion of a node between two communicating ones (insertion of the FTM)
In our prototype, AFT is realized through a combination of ROS features, Unix features, and some custom services. In particular, a node's life cycle (stop, start) is controlled directly through UNIX signals. Dynamic binding is achieved through the implementation of custom methods in the nodes and through external nodes, here the Adaptation Engine or the Recovery node, to orchestrate the adaptation. In conclusion, even if ROS lacks some essential features, AFT is possible with ROS. 7.
LESSONS LEARNED SUMMARY The general requirements for an executive support suitable for implementing AFT, exhibited in our former work, rely on the following features: i) control over components' life cycle at run-time (add, remove, start, stop), ii) control over interactions at run-time for creating or removing bindings. In addition to separation of concerns, these features are related to the degree of observability and control the software platform provides over the component-based architecture of applications (including FTMs) at run-time. Furthermore, to ensure consistency before, during and after reconfiguration of the component-based software architecture, several issues must be carefully considered: i) components must be stopped in a quiescent state, i.e., when all internal processing has finished, ii) incoming requests on stopped components must be buffered. This specification is our frame of reference to discuss the adequacy of ROS as a run-time support for AFT. In our experiments, a component was mapped to a node, which at run-time provides memory-space segregation. The binding between components relied on topics managed by the ROS Master. Dynamic binding was possible, but ROS does not provide a specific API to manage such connections between components. As we have seen in the previous section, additional code is required to manage dynamic bindings, using facilities provided by the underlying Linux operating system.
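The two consistency rules above (stop only in a quiescent state, buffer requests arriving on a stopped component) can be sketched as follows. This is illustrative Python modeling the rules, not ROS code; the class and its fields are assumptions.

```python
# Sketch of reconfiguration consistency: a component refuses to stop while
# busy (quiescence) and buffers requests received while stopped, replaying
# them on restart so that no request is lost.
from collections import deque

class Component:
    def __init__(self, process):
        self.process = process
        self.stopped = False
        self.buffer = deque()
        self.busy = False            # quiescent when not busy

    def stop(self):
        assert not self.busy, "stop only in a quiescent state"
        self.stopped = True

    def start(self):
        self.stopped = False
        replies = [self.handle(r) for r in list(self.buffer)]  # replay buffered requests
        self.buffer.clear()
        return replies

    def handle(self, request):
        if self.stopped:
            self.buffer.append(request)   # no request is lost while stopped
            return None
        self.busy = True
        try:
            return self.process(request)
        finally:
            self.busy = False

c = Component(lambda x: x * 10)
c.stop()
c.handle(1); c.handle(2)          # buffered while the component is stopped
print(c.start())                  # [10, 20]: buffered requests are replayed
```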
Control over components' life cycle:
- ROS provides commands to create and delete nodes
- Thanks to UNIX commands, nodes can be stopped and restarted
Control over interactions at run-time:
- ROS enables nodes to connect or disconnect to/from topics
- A specific service must be added to all nodes to trigger these connections/disconnections
- The topics a node can connect or disconnect to/from are defined at the initialization of the node
- ROS enables new topics to be created
- Topics store outgoing messages when subscribers are not available
Regarding the features of ROS for implementing AFT, we consider them not entirely satisfactory, as ROS does not provide dynamic binding between nodes, and the API to control components' life cycle at run-time is too weak. However, although imperfect, resilient computing using AFT can be implemented on ROS. Dynamic binding is possible on existing topics by adding some specific services to the nodes. For new topics, a customized solution was proposed in this work. ROS provides separation of concerns, since components can be mapped to nodes (Unix processes) that have their own address space. The model of communication used in ROS is also a benefit when designing and implementing resilient distributed applications. It is worth noting that, as soon as some change is identified, the adaptation of the FTM attached to the application is carried out off-line and validated according to the development process standards of the domain (automotive/ISO 26262, aerospace/DO-178C, IEC 61508, etc.). The dynamic adaptation of the mechanisms is a service to avoid a complete reload of the system. Using ROS in a dependable and resilient system is hindered by the fact that the ROS master is a single point of failure in the architecture. The ROS Master must be operational when installing an application and during its execution. When the ROS master fails, the whole software architecture must be restarted.
We are currently investigating a replicated implementation of the ROS master using the DMTCP (Distributed MultiThreaded Checkpointing) library developed at Northeastern University, Boston [20]. This is, however, very complex, and having multiple ROS masters running in parallel is currently not possible. For the time being our software architecture, as any ROS application, is tied to a unique ROS Master. This problem should be addressed by the ROS community in ROS 2: the next major revision of ROS is based on a DDS (Data Distribution Service) communication system that should help solve this problem by distributing the ROS master functionalities among the nodes of the system. This approach would however require reliable multicast protocols, properly implemented and validated.

8. CONCLUSION

The adaptation of embedded applications requires an adequate run-time support. Beyond design-for-adaptation issues that relate more to the development process, the run-time support must fulfill 5 requirements: (i) separation of concerns, (ii) componentization, (iii) component mapping to tasks. The last 2 criteria relate to the dynamic adaptation of the software on-line: (iv) dynamic binding and (v) control over components. ROS enables the first 3 requirements to be satisfied, but fails to provide efficient solutions for the last two. On-line adaptation is possible as demonstrated in this paper. We have been able to overcome the limitations of ROS thanks to underlying OS features and some additional logic implemented in the nodes. As a run-time support for resilient computing, ROS is an interesting development platform to test concepts for Adaptive Fault Tolerance. The mapping of components to ROS is simple (component \(\rightarrow\) node) and on-line modification of FTMs during the lifetime of the system is possible.
The insights gained from this work should help develop a suitable run-time support for Adaptive Fault Tolerance in the context of safety-critical real-time systems. Our current work is done in collaboration with the Renault-Nissan Group, especially targeting remote updates for ADAS. The basic principles of our approach are consistent with the framework proposed in Adaptive AUTOSAR. The basic operating system is based on POSIX, and several services are defined to master adaptation, like the Software Configuration Management service and the Platform Health Management service, which can be related to our Adaptation Engine. We believe that a run-time support like Linux or any POSIX-based OS is not dynamic enough to implement fine-grained adaptation. ROS provides an additional layer to this aim as a middleware, but the granularity remains coarse and dynamic binding is still difficult to handle. We hope that ROS 2 will provide a more powerful and reliable platform, using DDS (Data Distribution Service), for industrial applications. Solving the dynamic binding problem implies revisiting the publish-subscribe implementation to manipulate communication channels at run-time. Finally, the middleware should provide additional features to suspend and activate run-time entities, save/restore internal state, and buffer inter-entity communications. In the near future, we plan to address these issues, taking advantage of the development of the Adaptive AUTOSAR Platform.

REFERENCES
Loose Ends: A Mixed-Initiative Creative Interface for Playful Storytelling

Max Kreminski\textsuperscript{1}, Melanie Dickinson\textsuperscript{2}, Noah Wardrip-Fruin\textsuperscript{1}, Michael Mateas\textsuperscript{1}
\textsuperscript{1}University of California, Santa Cruz, \textsuperscript{2}Independent
\{mkremins, nwardrip, mmateas\}@ucsc.edu, meldckn@gmail.com

Abstract

We present Loose Ends, a mixed-initiative co-creative storytelling play experience in which a human player and an AI system work together to compose a story. Loose Ends specifically aims to provide computational support for managing multiple parallel plot threads and bringing these threads to satisfying conclusions—something that has proven difficult in past attempts to facilitate playful mixed-initiative storytelling. We describe the overall human-AI interaction loop in Loose Ends, including the implementation of the rules-based AI system that enables this interaction loop; discuss four examples of desirable mixed-initiative interactions that are possible in Loose Ends, but not in similar systems; and present results from a preliminary expert evaluation of Loose Ends. Altogether, we find that Loose Ends shows promise for creating a sense of coauthorship in the player while also mitigating the directionlessness reported by players of earlier systems.

Introduction

Mixed-initiative creative interfaces (MICIs) (Deterding et al. 2017; Liapis et al. 2016) aim to support a human user's creativity by providing them with an artificially intelligent creative partner. In the domain of storytelling-oriented creative writing, most existing MICIs function by providing suggestions as to how a story might be continued, thereby injecting unexpectedness into the writing process (Calderwood et al. 2020) and providing an immediate answer to the question of "What happens next?" when the user would otherwise become creatively stuck (Kreminski and Martens 2022).
These existing MICIs have shown promise in several ways. In particular, MICIs that function by providing short-term story continuations have proven effective at suggesting viable next steps for a story (Roemmele and Gordon 2015); taking the story in unexpected directions (Kreminski et al. 2020a; Calderwood et al. 2020; Singh et al. 2022); and creating a sense of shared authorship (Samuel 2016) between the user and system (Kreminski et al. 2020a; Calderwood et al. 2020; Singh et al. 2022). However, these existing MICIs also exhibit several recurring problems. Most prominently, because the continuations these systems provide take only local context into account, they have a tendency to pull the story in unwanted directions (Roemmele and Gordon 2015; Calderwood et al. 2020; Singh et al. 2022) or to otherwise create a sense of long-term directionlessness (Kreminski et al. 2020a) that inhibits the development of coherent high-level story structure. To address these problems, we created Loose Ends, a MICI for storytelling that aims to support the development of coherent longer-term story structure. By explicitly reasoning about multiple parallel plot threads and providing a mixed-initiative interface for managing long-term storytelling goals framed in terms of these plot threads, Loose Ends aims to provide suggestions that keep the story on track with respect to the development of character arcs, conflicts, and high-level narrative themes. 
Our main contributions are:
- A co-creative AI system that can reason about threaded plot structure in relation to high-level storytelling goals, proactively suggest new goals based on past plot events, and suggest character actions that advance these goals
- An approachable user interface for interacting with this AI system to create stories
- A preliminary evaluation of our approach by five experts in computationally engaged storytelling, indicating that Loose Ends shows promise at mitigating directionlessness while preserving a sense of coauthorship

In addition to these contributions, we also make the current version of Loose Ends available to be played in a web browser\textsuperscript{1} and release its codebase as open source\textsuperscript{2}.

\textsuperscript{1}https://itsprobablyfine.github.io/LooseEnds
\textsuperscript{2}https://github.com/ItsProbablyFine/LooseEnds

Background

Loose Ends draws inspiration from several past attempts to facilitate playful mixed-initiative storytelling, particularly Writing Buddy (Samuel, Mateas, and Wardrip-Fruin 2016) and Why Are We Like This? (Kreminski et al. 2020a,b). Both of these systems allow players to specify storytelling goals that guide the direction of the running story by influencing what story continuations the system will suggest. Both systems generate continuation suggestions in the form of structured plot events rather than prose, using a rules-based AI system rather than a language model to generate goal-relevant continuations. And both systems provide a story transcript that captures all past plot events in the form of a story outline, alongside player-written narration elaborating on the basic event descriptions generated by the system.

Figure 1: The Loose Ends user interface. The **Who is involved?** section displays basic information about a generated cast of five characters. The **What has happened?** section lists plot events that have taken place in the story so far, along with player-written text giving more details about these events. The **What happens next?** section shows AI-generated suggestions for what might happen next in the story. The **Where are we going?** section shows active storytelling goals, including transparent goals that have been suggested by the AI system rather than added by the player. One action suggestion (highlighted in orange in the bottom left) is being hovered over by the player; consequently, the impact this suggestion would have on the active storytelling goals if accepted (i.e., advancement of the majorWork goal) is also highlighted in orange on the right.

Loose Ends follows a similar architecture, although it differs from its predecessors in two key ways. First, its storytelling goals are more sophisticated than those in either predecessor system. Unlike in *Why Are We Like This?*, storytelling goals in Loose Ends specify sequences of events that must be added to the story for the goal to be satisfied (rather than individual events alone)—and unlike in *Writing Buddy*, storytelling goals in Loose Ends can be parametrized with specific characters and additional constraints. Second, the AI in Loose Ends is capable of suggesting new storytelling goals that are consistent with the story up until this point, rather than just steering action suggestions toward player-specified goals as in previous systems. Together, these changes result in a system that feels like an active writing partner while also guiding player-authored stories toward coherent longer-term structure. Beyond plot event-based systems such as *Writing Buddy* and *Why Are We Like This?*, a number of attempts have also been made to facilitate mixed-initiative storytelling by providing continuation suggestions in the form of unstructured prose.
Early examples of this approach can be found in the Say Anything (Swanson and Gordon 2012) and Creative Help (Roemmele and Gordon 2015) systems, which use case-based reasoning to find sentences similar to the user's most recently typed sentence in a large database of preauthored stories, then suggest these sentences as continuations. More recently, textual continuations provided by language models have been used to support storytelling in a relatively unmediated way (Manjavacas et al. 2017; Calderwood et al. 2020). Singh et al. (2022) finetune a large language model on a storytelling-relevant dataset and extend its completion suggestions to include images and sound as well as text, then evaluate this approach at scale. In each of these cases, purely text-based completions have been found to be pleasantly surprising and often relevant to the immediately previous parts of the story being told, but divergent from user-intended story structure in ways that require frequent revision by the user to maintain long-term direction. One recent mixed-initiative storytelling support system that departs from the interaction paradigm of local continuation suggestion is TaleBrush (Chung et al. 2022), which instead aims to give users direct control of high-level story structure via the sketching of a visual fortune arc for the story's main character. This approach has so far only been used to generate very short stories (on the order of five sentences long), and the coherence of the generated stories is limited, but this potential alternative means of specifying high-level storytelling goals still merits mention here. Another approach is that taken by Mimisbrunnur (Stefnisson and Thue 2018), which focuses on helping users create abstract story outlines that function as *generators* of stories rather than as the detailed backbone of a single story.

**System Description**

Loose Ends (Figure 1) is a mixed-initiative creative interface for playful storytelling.
We specifically conceive of Loose Ends as an AI-based narrative instrument (Kreminski and Mateas 2021): a system that can be played to produce narrative, in much the same way that a musical instrument can be played to produce music. The envisioned players of Loose Ends are the types of players who write retellings (Eladhari 2018) of their play experiences in games—especially emergent narrative games with sizable storytelling-oriented player communities, such as The Sims and Dwarf Fortress. In the Loose Ends interaction loop, a human player repeatedly selects action suggestions furnished by the underlying AI system to continue the plot of a running story, using storytelling goals to steer the narrative toward player-desired long-term outcomes. Actions selected by the player are added to a running story transcript, and each action can be annotated with additional text by the player—for instance to narrate the action in greater detail. Although Loose Ends as a system aims to be storyworld-agnostic, the version of Loose Ends presented here contains actions and storytelling goals that are specifically relevant to constructing stories about the development of character relationships and careers within a small community of artists. In the future, we envision that many different "playsets" for Loose Ends might be created, supporting the construction of stories set in many different kinds of storyworlds. The AI system that powers Loose Ends consists of two major components. First is a storytelling goals tracker that updates a pool of active and possible storytelling goals as new plot events are added. Second is an action suggestion generator that generates and ranks potential suggestions for the next plot event in the story based on the currently active storytelling goals.

Storytelling Goals Tracker

Storytelling goals in Loose Ends are used to set and maintain the high-level direction of the story.
Every goal is an instance of a goal template: a story sifting pattern written in the domain-specific logic programming language Winnow (Kreminski, Dickinson, and Mateas 2021). An example goal template is given in Appendix A. A goal template describes a sequence of interrelated events that can be interpreted as satisfying a particular storytelling purpose or instantiating a particular kind of plot thread. For instance, the current version of Loose Ends includes templates for goals that introduce or develop character relationships (e.g., friendship or rivalry); internal conflicts (e.g., artistic or career struggles); and high-level narrative themes (e.g., moral themes related to the virtues of persistence in the face of adversity). There were 12 goal templates total in the version of Loose Ends evaluated here. A goal is a partial match against a goal template, representing a sequence of past plot events that partially meet the goal template’s requirements. To advance a goal is to locate and accept an action suggestion that continues the sequence of events that match the underlying goal template. For instance, if a majorWork goal involving the character Aidan has been advanced past the first event (in which the goal’s main protagonist character begins work on a major art project) and a second event in which Aidan makes progress on the project is added to the story, this goal will be advanced another step. Goals can also be cut off if an event that violates one of the goal’s constraints is added to the story. For instance, if an onARoll goal involving the character Bella is active, but another character completes a major artwork before Bella manages to complete two major artworks in a row, this goal will be cut off, since a condition of the onARoll goal has now been violated. 
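The advance/cut-off semantics described above can be illustrated with a small sketch. This is not Winnow (the actual goal templates are story sifting patterns in a logic programming language); it is a simplified stand-in in which a goal tracks its progress through a fixed event sequence, and field names are our own assumptions for the example:

```python
def step_goal(goal, event):
    """Advance the goal if the event is the next expected one in its
    template sequence; cut it off if the event violates a constraint."""
    if any(forbid(event) for forbid in goal["constraints"]):
        goal["status"] = "cut off"
    elif goal["status"] == "active" and event == goal["sequence"][goal["progress"]]:
        goal["progress"] += 1
        if goal["progress"] == len(goal["sequence"]):
            goal["status"] = "complete"
    return goal

# An onARoll-style goal: Bella must complete two major works in a row,
# cut off if any other character completes a major work first.
goal = {
    "sequence": ["Bella:majorWork", "Bella:secondMajorWork"],
    "constraints": [lambda e: e.endswith(":majorWork") and not e.startswith("Bella")],
    "progress": 0,
    "status": "active",
}

step_goal(goal, "Bella:majorWork")  # advances: first expected event matched
step_goal(goal, "Cam:majorWork")    # violates the constraint: goal is cut off
```

A partial match in the real system corresponds to a goal with nonzero `progress` that is still `active`; the tracker maintains a pool of such matches as events arrive.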
Goals are parametrized by the characters that are involved in them, and multiple goals that are based on the same underlying goal template can be active concurrently as long as they pertain to a different configuration of characters. For instance, two formGrudge goals can be simultaneously active if either the character that holds the grudge, the target of the grudge, or both are different between the two goals. Additionally, if the player knows that they want a certain type of plot thread to be present in the story but does not know which characters they want to be involved, they can add a storytelling goal of the relevant type without any character parameters specified and allow the system to suggest possible ways of casting the available characters into this thread. The Loose Ends user interface permits players to add goals manually (by selecting a goal template to instantiate as a goal, from a library of all available goal templates) and to remove goals that have already been established at any time. In addition, the AI system in Loose Ends constantly tracks and evaluates a pool of partial matches that the player has not established as goals. If one of these partial matches advances beyond a certain threshold (33% completion in the current version of Loose Ends), the system will automatically promote it to an active goal, rendered in a transparent style to indicate that this is a system-suggested goal rather than a player-added one. These goals can be removed by the player like any other (enabling the player to veto the system's suggestions of additional storytelling goals), or the player can click on them to remove the transparency effect and notionally "lock them in" as player-intended goals.

Action Suggestion Generator

Action suggestions in Loose Ends are drawn from two pools of actions. The basic actions pool contains actions that are possible for any character at any time, regardless of social state, and remains fixed at all times.
The dynamic actions pool is recalculated whenever a new event is added to the story, and contains actions that are only possible because of active storytelling goals that are in an appropriate state. For instance, when a complete establishGrudge goal between the characters Cam and Devin is active, the dynamic actions pool will contain actions that Cam can only take toward Devin because of their active grudge on Devin (such as sabotaging Devin’s most recent artwork). There were 32 action types total in the version of Loose Ends evaluated here: 20 basic actions and 12 dynamic actions. Actions in general may be either solo (involving only a single character, the actor who takes the action) or dyadic (involving two characters, the actor who takes the action and the target toward whom the action is directed). Creating a minor artwork, for instance, is a solo action, while insulting another character is a dyadic action. In addition, every action has an event type uniquely identifying the type of action that was performed and a list of zero or more tags that assign the action to high-level categories (such as release for actions in which the actor finishes and releases an artwork, friendly for actions in which the actor is friendly toward the target, and harms for actions that harm the target). Action suggestions are recalculated every time the set of active storytelling goals changes. When calculating action suggestions, the action suggestion generator first iterates over all possible next actions (in both the basic and dynamic action pools) and determines, for each action, which storytelling goals would be impacted (either advanced or cut off) by the addition of this action to the story. 
Each action is then given a priority score, which is the sum of three factors:
- The number of active storytelling goals that this action would advance
- A constant factor (0.5) if this action is from the dynamic actions pool—i.e., if it is only possible because of an active storytelling goal
- A random factor (between 0 and 0.5) to randomly permute the priority of actions with the same base score

Actions are sorted by their score and displayed in order, with the three highest-scoring actions being pulled to the top of the action suggestions list. In this way, actions that relate most strongly to the active storytelling goals are prioritized for display, with randomness ensuring a degree of alternation between suggestions that advance parallel plot threads. When the user hovers over an action suggestion to consider it, the precalculated information about which storytelling goals this action would advance or cut off is used to display the ramifications of accepting this action in the storytelling goals pane on the right side of the user interface.

Interaction Examples

In conjunction, the Loose Ends AI and user interface permit several desirable interactions that are not possible in other mixed-initiative creative interfaces for storytelling. Four especially interesting examples of novel mixed-initiative interactions enabled by Loose Ends (all of which took place organically during evaluation) are presented below.

Discovering New Storytelling Goals

Beyond simply suggesting action-level continuations to a running story in accordance with player-provided storytelling goals, Loose Ends can also infer new storytelling goals that are consistent with the story so far and proactively suggest these goals to the player. This often results in interactions where a player who would otherwise become uncertain of what to do next is inspired by, and begins pursuing, a system-discovered storytelling goal instead.
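The three-factor priority score described for the action suggestion generator can be sketched in a few lines. The data shapes and names below are illustrative assumptions, not the system's actual representation:

```python
import random

DYNAMIC_BONUS = 0.5  # constant factor for dynamic-pool actions

def priority(action, active_goals, rng):
    """Score = goals advanced + dynamic-pool bonus + random tiebreak."""
    advanced = len(action["advances"] & active_goals)
    bonus = DYNAMIC_BONUS if action["dynamic"] else 0.0
    tiebreak = rng.uniform(0, 0.5)  # permutes equally scored actions
    return advanced + bonus + tiebreak

def rank_suggestions(actions, active_goals, seed=42):
    """Return action names sorted by descending priority score."""
    rng = random.Random(seed)
    ranked = sorted(actions, key=lambda a: priority(a, active_goals, rng),
                    reverse=True)
    return [a["name"] for a in ranked]

# Hypothetical state: two active goals, three candidate actions.
goals = {"majorWork:Aidan", "formGrudge:Bella,Cam"}
actions = [
    {"name": "insult",    "dynamic": False, "advances": {"formGrudge:Bella,Cam"}},
    {"name": "sabotage",  "dynamic": True,  "advances": {"formGrudge:Bella,Cam"}},
    {"name": "smallTalk", "dynamic": False, "advances": set()},
]
print(rank_suggestions(actions, goals))
```

Because the random tiebreak is at most 0.5, the dynamic-pool bonus can break ties among actions that advance the same number of goals, but it can never outrank advancing one additional goal.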
For instance, in Figure 2, two establishGrudge goals targeting the same character (Cam) have just been completed. At this point, Loose Ends automatically discovers and surfaces a successive character relationship development goal, in which Aidan and Bella (who both have grudges on Cam) bond over their shared dislike. The first two steps of this goal are already complete, because the system has been tracking the possibility of surfacing this goal in the background, but it has only just now progressed far enough to be displayed.

Discovering Thematic Conflicts

Loose Ends can make it apparent when a conflict has arisen between two active storytelling goals. For instance, in Figure 3, the player is simultaneously working toward two distinct thematic goals for the story and considering an action that will reward Emily with career success after she completes a major artwork. This would support the theme that persistent work on a single major project leads to success (slowAndSteady) but undermine the competing theme that the way to success is to create a rapid succession of more minor artworks (quantityOverQuality). When the impact of the considered action on all active author goals is visualized, the conflict between these goals is revealed to the player.

Resurfacing Dormant Plot Threads

Because Loose Ends can maintain a larger set of active storytelling goals than the player can hold in their head all at once, action suggestions can serve to remind players of incomplete plot threads that they would otherwise forget to revisit.
For instance, long-term storytelling goals like the tryTryAgain thematic goal (which requires a single character to repeatedly release artworks that are poorly received, before finally releasing one that is well-received) may temporarily fade into the background as the player focuses on another subplot that weaves together a few distinct storytelling goals at once—but once this more pressing subplot is complete, actions advancing the earlier thematic goal will again rise to the top of the action suggestions pool, reminding the player to return to the previously initiated thread. This interaction pattern particularly helps to facilitate narrative reincorporation as discussed by Tomaszewski (2011).

Interleaving Parallel Plot Threads

When multiple parallel plot threads are active and none of these threads has storytelling priority, the slight random permutation of equally ranked action suggestions means that Loose Ends by default tends to promote actions that alternately advance different threads. This can help players escape fixation (Gero 2011), in which they develop a narrow and premature focus on one plot thread or set of characters and forget about the possibility of developing others.

Evaluation

Procedure

Since Loose Ends is still under active development, we conducted a preliminary and formative expert evaluation, intended to give us an initial sense of whether we have made qualitative progress toward our user experience goals. This evaluation was modeled on the evaluation of Germinate (Kreminski et al. 2020c), an earlier mixed-initiative co-creative system published at AIIDE. We recruited five expert evaluators, all acquaintances of the first author, and all of whom are both experienced creative writers and researcher-practitioners in intelligent narrative technologies.
Four of these evaluators (an assistant professor of computer science, two game industry narrative designers, and one independent creator of narrative games) hold a PhD in a relevant area, while the other (a PhD student in computational media) holds multiple relevant graduate degrees. All evaluators had past experience with mixed-initiative storytelling in general, and none had encountered Loose Ends before. Because our evaluators were familiar with the state of the art in mixed-initiative storytelling, they were readily able to compare Loose Ends to similar systems and judge what it does well or poorly in comparison. Each evaluator participated in a single remote play session via Zoom. Each session was approximately one hour long and began with a brief (approximately 5-minute) introduction to the Loose Ends interface by one of the researchers. Subsequently, the evaluator constructed a single story using the Loose Ends interface while thinking aloud and sharing their screen. Once the story was complete, one of the researchers asked several unstructured interview questions to prompt reflection on play patterns they observed during the session. Both the think-out-loud and interview portions of the playtest sessions were recorded for later analysis. Finally, evaluators were administered a brief user experience questionnaire consisting of the following questions:

Q1. What is your overall impression of the system?
Q2. How easy was it to use the system?
Q3. Were you able to use it without unnecessary effort?
Q4. Did you feel a sense of control over the story?
Q5. Was the system fun to use?
Q6. Did you feel a sense of ownership of the story?
Q7. Were you curious to see what would happen next in the story?
Q8. Did you generally know what direction you wanted the story to go next?

Q1 was open-ended and qualitative, while Q2-Q8 were quantitative, with responses ranging from 1-5 (where 5 indicates the highest level of agreement with the premise of the question).
Q1-Q5 were adapted directly from the Germinate expert evaluation questionnaire (Kreminski et al. 2020c), while Q6-Q8 were intended to elicit reflection on aspects of the co-creative storytelling experience that were frequently mentioned by playtesters of Why Are We Like This? (Kreminski et al. 2020a). A summary of evaluator responses to the quantitative questions is given in Table 1. <table> <thead> <tr> <th>Question</th> <th>E1</th> <th>E2</th> <th>E3</th> <th>E4</th> <th>E5</th> <th>Avg</th> </tr> </thead> <tbody> <tr> <td>Q2. Usability</td> <td>4</td> <td>4</td> <td>5</td> <td>4</td> <td>4</td> <td>4.2</td> </tr> <tr> <td>Q3. Effortlessness</td> <td>4</td> <td>5</td> <td>5</td> <td>5</td> <td>5</td> <td>4.8</td> </tr> <tr> <td>Q4. Control</td> <td>4</td> <td>4</td> <td>4</td> <td>4</td> <td>3</td> <td>3.8</td> </tr> <tr> <td>Q5. Fun</td> <td>4</td> <td>5</td> <td>4</td> <td>4</td> <td>4</td> <td>4.2</td> </tr> <tr> <td>Q6. Ownership</td> <td>3</td> <td>4</td> <td>3</td> <td>3</td> <td>3</td> <td>3.2</td> </tr> <tr> <td>Q7. Curiosity</td> <td>4</td> <td>5</td> <td>4</td> <td>3</td> <td>4</td> <td>4.0</td> </tr> <tr> <td>Q8. Direction</td> <td>4</td> <td>5</td> <td>4</td> <td>3</td> <td>4</td> <td>4.0</td> </tr> </tbody> </table> Table 1: Summary of evaluators' responses to quantitative survey questions. All responses were given on a numeric scale from 1-5, where 5 is highest agreement.

Evaluation Results

Directionlessness Is Mitigated. Our central design goal for Loose Ends was to mitigate the sense of high-level directionlessness reported by players during playtesting of Why Are We Like This? (Kreminski et al. 2020a) and assist in the development of stories that contain satisfying high-level structure. Both quantitative and qualitative evaluation responses suggest that Loose Ends successfully supports the development and maintenance of high-level narrative direction from the player's perspective.
Quantitative survey responses related to sense of storytelling direction (Q8) ranged from 3-5, indicating that all evaluators had a sense of where they wanted the story to go next at a majority of points during the storytelling process. Additionally, all but one evaluator (E4) reported a score of 4 or higher in this category. Qualitative think-out-loud remarks and interview responses are consistent with these quantitative results. In particular, two evaluators (E4 and E5) remarked unprompted on how they never experienced writer's block or a sense of being stuck during the play process. Additionally, no evaluators explicitly reported a sense of aimlessness or insufficient medium-term direction at any point during their playthrough. This stands in stark contrast to the prevalence of self-reported aimlessness during playtesting of Why Are We Like This?, wherein four of five playtesters reported a sense of directionlessness at least once during play (Kreminski et al. 2020a).

Coauthorship Is Preserved. One open question for Loose Ends was whether the AI system could successfully preserve the sense of shared authorship that players experience in Why Are We Like This? while intervening more proactively in the storytelling process—including through the suggestion of new high-level storytelling goals. Both quantitative and qualitative evaluation responses suggest that Loose Ends succeeds in this regard. Quantitative responses regarding sense of control over the story (Q4), sense of ownership of the story (Q6), and sense of curiosity regarding what would happen next in the story (Q7) are especially salient here. For control, all evaluators reported a score of at least 3 (indicating a moderate sense of control), and all but one (E5) reported a score of 4 (indicating a strong, but not complete, sense of control).
For ownership, all evaluators reported a score of at least 3 (indicating a moderate sense of ownership), and one (E2) reported a score of 4 (indicating a strong, but not complete, sense of ownership). For curiosity, scores were distributed across the 3-5 range, indicating that all evaluators felt at least moderate curiosity, while all but one (E4) experienced either strong or very strong curiosity regarding the story’s next direction. Taken together, these scores suggest that evaluators generally remained in control of the story while working with the system, but that they also created stories containing unexpected twists that they would be unlikely to invent if writing alone—to the extent that the AI system seemed to hold partial ownership of the stories that emerged. Qualitative think-out-loud remarks and interview responses further support this interpretation. One evaluator (E5) felt that the play process reflected “a nice meeting in the middle” between player-led and system-led storytelling; another (E1) remarked that it “feels like the sweet spot for co-creativity”; and a third (E2) felt it to be a “good collaboration”, “kind of the dream” for mixed-initiative co-creativity.

Goal Alignment Is Unexpected and Fun. Four evaluators (E2-E5) remarked unprompted on how much they enjoyed it when the system correctly anticipated where they wanted the story to go next and offered options (especially storytelling goal suggestions) for continuing the story in a relevant direction. One evaluator (E2) was particularly pleasantly surprised by how often this took place during play. This suggests that the feeling of being seen or understood by the system can be a significant source of enjoyment during mixed-initiative storytelling, perhaps related to the aesthetic of responsiveness as described by Mason (2021).

Evaluators Found Loose Ends Easy to Use.
Loose Ends was rated highly by evaluators on usability and (especially) lack of unnecessary effort involved in use, suggesting that it is considered highly usable in comparison to similar systems with which these evaluators were familiar. All evaluators reported a score of at least 4 for both Q2 (usability) and Q3 (effortlessness), and all but one evaluator (E1) reported a score of 5 for effortlessness, indicating unanimous agreement that Loose Ends is easy to use. One caveat to this finding is that our evaluators, as experts in computationally engaged storytelling, were already familiar with several similar systems and used to putting up with unpolished interfaces. Consequently, this finding might not generalize well to other player populations.

Some Players Want Prose-Level Suggestions. Evaluators used the freely editable text boxes in the story transcript in very different ways. Two evaluators (E1 and E4) mostly used them to write extended narration of high-level plot events, as we originally envisioned. One (E2) ignored the text boxes almost entirely. One (E3) used the text boxes to write short notes-to-self about why they chose certain actions from a storytelling perspective—a use-case we did not envision. And one (E5) initially used the text boxes to add terse narrative details for later expansion into full narration, but then stopped using them partway through play. In qualitative think-out-loud remarks and interview responses, two evaluators (E1 and E2) indicated that they wanted assistance in coming up with potential details for how certain high-level actions could have been narrated. E2 in particular (who made almost no use of the text boxes) stated that they would have found this additional narration-level support especially helpful. Altogether, under the cognitive process model of writing (Flower and Hayes 1981; Gero et al.
2022), we find that Loose Ends currently provides assistance mostly at the planning stage, specifically in the creation of plot outlines. Expansion of support to later stages of the writing process represents a potential direction for future work.

Storyworld Inconsistencies Stand Out. The current version of Loose Ends makes use of a stateless, naïvely random action suggestion generator rather than a full-fledged social simulation to generate candidate action suggestions. Character relationship state is not tracked anywhere besides in storytelling goals related to friendship and rivalry, and most action types can be suggested between any pair of characters regardless of these characters’ current relationship state. This leads to occasional generation of action suggestions that seem nonsensical from the perspective of a player who is tracking character relationship state mentally. Three evaluators (E3-E5) commented at least once on this perceived occasional lack of consistency as a detriment to the overall storytelling experience in Loose Ends. This finding underscores the importance of storyworld consistency maintenance features for storytelling support, as suggested by several past studies, including Kreminski et al. (2019) and Calderwood et al. (2020). In the future, we intend to extend Loose Ends to use a more sophisticated suggestion generation mechanism that tracks substantially more character relationship state, hopefully alleviating this problem.

Common Feature Requests. Three evaluators (E1, E3, and E4) mentioned wanting to filter action suggestions to only display actions with particular characteristics, such as those of a particular event type or those involving particular characters. Three evaluators (E1, E3, and E4) mentioned a desire to express a temporary focus on a specific storytelling goal, so that the system would prioritize action suggestions that would advance this goal.
Four evaluators (E1-E3 and E5) expressed a desire to minimize completed storytelling goals without removing them, in order to free up space.

Evaluation Limitations. Our evaluation of Loose Ends is limited in several ways, particularly in terms of how evaluators were selected. Because evaluators were acquainted with the authors, some bias toward positive assessment of Loose Ends is likely; because evaluators were experts in mixed-initiative storytelling rather than novices, they likely found the system easier to use than novices might; and because the number of evaluators we employed is small, the quantitative results of evaluation might not generalize well. Comparison of our results to those from early-stage playtesting of Why Are We Like This? is still possible to some extent, since similar evaluators (interactive storytelling researchers who knew the system’s creators) were employed in WAWLT playtesting. However, a larger user study with players who are not researcher-level experts in mixed-initiative storytelling should be conducted in the future to determine more conclusively whether Loose Ends effectively supports storytelling among a more general audience.

Conclusion

Preliminary evaluation of Loose Ends, a novel mixed-initiative creative interface for storytelling, suggests that it may be able to preserve the desirable sense of coauthorship present in earlier systems while mitigating player-perceived narrative directionlessness. We hope that the formalization of jointly human- and machine-understandable storytelling goals presented here, and the idea of a mixed-initiative storytelling partner that can explicitly reason about and suggest high-level plot directions for a story (in addition to immediate continuations), will be taken up and further developed in the next generation of MICIs for storytelling support.
Appendix A: Example Storytelling Goal Template

Storytelling goal templates in Loose Ends are event sequence matchers written in the domain-specific logic programming language Winnow. Below is the code that implements the bondOverSharedDislike storytelling goal.

```
(pattern bondOverSharedDislike
  (event ?e1 where eventType: formGrudge, actor: ?c1, target: ?c3)
  (event ?e2 where eventType: formGrudge, actor: ?c2, target: ?c3)
  (event ?e3 where tag: friendly, actor: ?c1, target: ?c2)
  (event ?e4 where tag: friendly, actor: ?c2, target: ?c1)
  (unless-event where eventType: abandonGrudge, actor: ?c1, target: ?c3)
  (unless-event where eventType: abandonGrudge, actor: ?c2, target: ?c3))
```

Goals of this type match a sequence of four plot events (successively bound to the variables ?e1 through ?e4), involving three characters ?c1 through ?c3, with arbitrarily many unrelated events interleaved between them. Specifically, the goal will be satisfied if two characters ?c1 and ?c2 both form grudges against the same character ?c3, and two reciprocally friendly interactions then happen between ?c1 and ?c2 (without either character first abandoning their grudge against ?c3). For more information on the semantics of the Winnow language, see Kreminski, Dickinson, and Mateas (2021).

Appendix B: Example Output Story

The following text is an example story created by one of our evaluators. Each plot event is presented as a terse system-generated description, followed by an indented narration of the event written by the evaluator during play.

```
Aidan inviteIntoGroup Bella.
  Aiden meets Bella at an art opening, and invites her to a critique group.
Emily rejectSuperiority Bella.
  Emily hears Bella’s in Aidan’s cool critique group, and is envious, but says Bella’s not good enough to be in it.
Emily beginMajorWork.
  Emily starts work on a collage work that’s a thinly veiled-critique of Aidan and Bella’s group.
Bella formGrudge Emily.
  Bella hears what Emily’s new piece is about, and doesn’t like it.
Bella sendPostcard Aidan.
  Bella sends a postcard to Aidan about her new piece that’s been going through the critique group.
Aidan formFriendship Bella.
  Aidan and Bella become good friends through the group / shows / etc.
Emily receivePoorReview.
  Emily unveils her collage work, and it gets panned in the local art review, whose in the pocket of Big Aidan.
Emily askForHelp Bella.
  Emily asks Bella for some help winning over the local art critics.
Emily worryAboutMajorWork.
  Emily worries that Bella’s help won’t be enough to increase her reputation with the local art critics.
Emily makeProgressOnMajorWork.
  The collage grows to a series of collages about how messed up Aidan and Bella’s art group is.
Emily worryAboutMajorWork.
  Emily worries that despite the elaboration of the theme, it’s still too oblique to ring home for Aidan and Bella.
Bella formFriendship Aidan.
  Aidan and Bella become good friends through the group / shows / etc.
Emily makeProgressOnMajorWork.
  Emily continues making the collage series.
Emily makeProgressOnMajorWork.
  Emily continues making the collage series and secures a venue for the opening show for the series.
Emily finishMajorWork.
  She finishes the collages, mounts the show, and has the opening.
Emily receiveGoodReview.
  The show is a great hit, and the critics love it!
Bella shunFromGroup Emily.
  Bella reads the review, and finds out what the show was about, and says “you’ll never be part of our group”
Emily forms a grudge against Bella.
  Emily is like “you’re group is dumb, your art is facile, I don’t care.”
Bella begins her major work.
  Bella comes up with a collage of her own about Emily.
Cam buys lunch for Devin.
  Meanwhile, the two artists totally outside the mess up politics of neo-collagists have lunch.
Devin apologizes to Cam.
  Devin apologizes for not coming to Cam’s show...he was taking in this new collage form that just emerged...very experimental...very cool
Bella worries about major work.
  Bella’s like “this collage stuff is actually really hard to make concrete statements about people with...damn is Emily actually a better artist than me?? Impossible!”
Bella makes progress on major work.
  “Yeah impossible, this is ok...I guess....kinda”
Bella receives negative feedback from Devin.
  Devin sees the work in progress and is like “are you biting on Emily’s style? This is derivative.”
Bella complain about major work.
  Bella complains to Aidan about her collage piece, and that it’s more trouble than its worth.
Cam sends postcard to Devin.
  Cam sends a postcard with “wow, heard about the sick burn on Bella...haha”
Aidan apologizes to Bella.
  “I’m sorry, I shouldn’t have introduced this conflict into your life...collage is a tempestuous medium, you should only approach it with pure intentions.”
Bella makes progress on major work.
  Bella does.
Bella makes progress on major work.
  Bella makes even more progress on the collage, but it’s becoming more about...the travails and temptations of envy and jealousy and cliques.
Bella finishes major work.
  Shows the work, and invites Emily to see how she’s healed and moved on, and wants to bury the hatchet.
Bella receives an award.
  Bella unexpectedly wins an award for her artwork, igniting a fresh, even more potent round of jealousy from Emily.
```

Acknowledgements

Max Kreminski conducted part of this research while in residence at Stochastic Labs.

References
Crowdsourcing semantic content: a model and two applications

Angelo Di Iorio¹, Alberto Musetti¹, Silvio Peroni¹, Fabio Vitali¹
¹ Department of Computer Science, University of Bologna, Bologna, Italy
diiorio@cs.unibo.it, musetti@cs.unibo.it, speroni@cs.unibo.it, fabio@cs.unibo.it

Abstract. While the original design of wikis was mainly focused on a completely open free-form text model, semantic wikis have since moved towards a more structured model for editing: users are driven to create ontological data in addition to text by using ad-hoc editing interfaces. This paper introduces OWiki, a framework for creating ontological content within not-natively-semantic wikis. Ontology-driven forms and templates are the key concepts of the system, which allow even inexpert users to create consistent semantic data with little effort. Multiple and very different instances of OWiki are presented here. The expressive power and flexibility of OWiki proved to be the right trade-off to deploy the authoring environments for such very different domains, ensuring at the same time editing freedom and semantic data consistency.

Keywords: authoring, community, interfaces, ontologies, templates.

I. INTRODUCTION

The explosion of social software tools has changed the way most users access the World Wide Web. Even inexpert users can now publish their content with a few clicks and do not need to master complex technologies or to use professional tools. This content is primarily meant to be consumed by human users, as with YouTube videos, Twitter messages, Facebook posts and so on. The creation of semantic web content – available for automatic search, classification and reasoning – is much more difficult and time-consuming. Technical competencies and domain-specific knowledge, in fact, are still required of authors. The shift of the Web from a human-understandable to a machine-understandable platform, as envisioned by the Semantic Web community [1], is far from being complete.
The term “lowercase semantic web” [2] has been coined to indicate research efforts aiming at bridging the gap between simplified authoring and semantic web data. Such a lowercase semantic web is not an alternative to the uppercase “Semantic Web”, but rather an intermediate step towards the same goal. While the Semantic Web aims at bringing full-fledged reasoning capabilities to intelligent software, the lowercase “semantic web” aims at encoding semantic data that can be accessed by everyday software and, above all, can be created by unsophisticated users. Semantic wikis play a leading role in such a scenario. Semantic wikis are enhanced wikis that allow users to decorate pages with semantic data by using simplified interfaces and/or specialized wiki syntaxes. They provide users with sophisticated searching and analysis facilities, and maintain in full the original open editing philosophy of wikis in everything but the form in which the content is created. Many semantic wikis, though, are essentially designed for scholars and experts in semantic technologies, and are still difficult for average wiki contributors with no technical expertise to use. Forms are often used to mitigate this issue, providing an intuitive interface that is well known to most computer users. Unfortunately, semantic forms as they currently exist do not guarantee the simplicity and ease of use that average users may expect. In most cases, in fact, these forms use generic text fields that do not fit the domain the wiki is used for, or only include a limited set of interface widgets. In this paper we introduce a methodology and a tool to simplify the process of authoring semantic web data through wikis, and we present two very different environments where that tool was delivered. The overall approach relies on ontologies and allows even inexpert users to create semantic data with little effort.
The tool, called OWiki, uses Semantic Web technologies to handle the wiki knowledge-base and MediaWiki forms and templates to deliver intuitive interfaces to the final users. Forms and infobox templates are automatically generated from ontological data, and are completely customizable to change the nature, structure and constraints of the form widgets. The paper is structured as follows: Section II gives an overview of the main approaches to semantic wikis; Section III presents the OWiki approach, focusing on its ontologies and their application in this context; Section IV goes into the details of the use-case, and the internals of the system are discussed in Section V. Two different instances of OWiki are presented in the last part of the paper, before drawing some conclusions: Section VI introduces BYOG, a collaborative environment for creating and delivering customized touristic guides, while Section VII presents ACUMEWiki, a shared platform for creating multi-disciplinary vocabularies and concept maps.

II. RELATED WORKS: SEMANTIC WIKIS AND ONTOLOGIES

The integration and interaction between ontologies and wikis is a hot research topic. Several approaches to semantic wikis have been developed to bring together the benefits of the free editing philosophy of wikis and ontological data. Semantic wikis can be organized into two main categories according to their connections with the ontologies: “wikis for ontologies” and “ontologies for wikis” [3]. In the first case, the wiki is used as a serialization of the ontology: each concept is mapped into a page and typed links are used to represent object properties. Such a model has been adopted by most semantic wikis. SemanticMediaWiki [4] is undoubtedly the most relevant system in this category. Some researchers have also proposed SMW-mOWL, a “meta-model extension of SemanticMediaWiki”.
SMW-mOWL is a way to encode an entire OWL ontology into a wiki by exploiting “semantic templates”. Basically, it consists of storing ontology elements as template instances: classes, anonymous classes, properties, restrictions and individuals are all represented as templates, with a direct mapping between SMW-mOWL and OWL-AS. SMW-mOWL improves the ontological expressiveness of SemanticMediaWiki. Moreover, such an approach makes it easier to edit ontologies within the wiki itself, as templates can be edited using the form-based interfaces of the SemanticForms extension [5]. On the other hand, the SMW-mOWL metamodel is conceptually difficult and requires high expertise to be fully exploited. The second category of semantic wikis, based on the principle of “ontologies for wikis”, includes all those wikis that are actually built on top of ontological foundations. The idea is to exploit ontologies to create and maintain consistent semantic data within a wiki so that sophisticated analysis, queries and classifications can be performed on its content. IkeWiki [6] was one of the first wikis to adopt this approach. Its deployment starts by loading an OWL ontology into the system, which is automatically translated into a set of wiki pages and typed links. Multiple interfaces are provided to the users for editing the plain wiki content, adding new metadata or tagging pages. IkeWiki strongly relies on Semantic Web technologies: it even includes a Jena OWL repository and a SPARQL engine used for navigation, queries and display of the semantic content of the wiki. SweetWiki [3] implements a user-friendly ontology tool designed for both expert and non-expert users. The system is characterized by two aspects: the strong connection with the ontologies and the provision of Ajax-based interfaces for editing content and metadata. SweetWiki defines a “Wiki Object Model”, i.e. an ontology describing the wiki structure.
Concepts like “document”, “page”, “link”, “version” and “attachment” are all codified in an OWL file that is accessed and manipulated through the wiki itself. These concepts are made explicit in SweetWiki, although they are usually hard-coded in most semantic wikis. SweetWiki also allows users to import external ontologies and, once again, to access and manipulate those ontologies through the wiki interfaces. In addition, the system includes a third ontology that is dynamically created by the users through “assisted social tagging”: users can add metadata to any page and can put pages in relation. These metadata values form a folksonomy that, on the one hand, is freely editable by users and, on the other, is built on top of ontological data. The interface for tagging, in fact, suggests consistent metadata by exploiting SPARQL queries and autocompletion features. Finally, UFOWiki [7] is another project that aims at integrating wikis, ontologies and forms. UFOWiki is a wiki farm, i.e. a server that allows users to set up and deploy semantic wikis. Multiple wikis are deployed by the same farm, so that they can share (ontological) data in a distributed environment. The overall content is stored in a centralized repository as RDF triples that express both the actual content of each page and its metadata. Multiple ontologies are used to model these data. The SIOC ontology (http://sioc-project.org) is used to describe wiki pages and users’ actions, helped by a domain ontology that can be imported and mapped into the wiki. Users are also provided with a combination of plain-text editors and forms to modify the ontology within the wiki. The UFOWiki forms are generated on-the-fly starting from the mapping between the classes and properties of the ontology and the fields and types of the form. The wiki administrators are in charge of setting this mapping through a graphical and intuitive interface.
The high configurability of UFOWiki forms is one of its most relevant features; its ontological expressiveness is another. In fact, UFOWiki allows users to create complex ontology instances and relations within a single page. While most other wikis only create assertions whose subject is represented by the subject of the page containing that assertion, UFOWiki allows users to associate sub-forms to classes of the ontology and to handle these parts as separate resources. The result is a more fine-grained control over the ontology and its populating process.

III. OWIKI: ONTOLOGY-DRIVEN GENERATION OF TEMPLATES AND FORMS FOR SEMANTIC WIKIS

OWiki is a Gaffe-based [8] extension of MediaWiki that supports users in creating and editing semantic data. The basic idea of OWiki is to exploit ontologies and MediaWiki editing/viewing facilities to simplify the process of authoring semantic wiki content. In particular, OWiki exploits MediaWiki templates, infoboxes and forms. A template is a set of key-value pairs, edited as a record and usually formatted as a table in the final wiki page. Templates are particularly useful to store structured information: they are very easy to edit and search, disconnected from the final formatting of a page, and so on. Templates are defined in special pages that can be referenced from other pages. These pages include fragments with the same structure of the template but filled with instance data. The template-based component of a page is also called infobox. OWiki exploits ontologies to represent the (semantic) knowledge-base of a wiki and templates to display and edit that ontology through the wiki itself. The integration and interaction between ontologies and templates can be summarized in two points:

- each class of the ontology is associated with a template-page, and each property is mapped into a key of the infobox;
- each instance of that class is represented by a page associated with that template.
Each line in the infobox then contains the value of a property for that instance. Data properties are displayed as simple text, while object properties are displayed as links to other pages (representing other instances of the ontology). OWiki currently maps all instances of the domain ontology into wiki pages and all properties into lines of the infoboxes. We are extending the engine in order to also allow users to select a subset of concepts to be represented by wiki pages, and the set of properties to be shown in those pages. OWiki templates are actually transparent to users. In fact, each template is associated with a form that allows users to create and edit the relative instances. Users do not modify the templates directly but only access specialized form fields. The crucial point is that even forms are generated on-the-fly from ontological data. OWiki also includes a GUI ontology describing widgets and interface elements. The concepts and relations of the domain ontology – that is, the ontology in which metadata content is specified – are mapped into form elements that are delivered to the final user. During the installation phase OWiki creates a basic set of forms by merging the domain ontology with the GUI one. At editing time, the system shows a very basic form and saves it as a special page (template). This page can then be reorganized into a new form by adding dynamic behaviours, moving buttons, changing the field order and so on. Before describing the internal architecture of the system, it is worth spending a few more words on the way OWiki uses ontologies. In fact, the extensive usage of ontologies makes it possible (i) to make OWiki independent of the domain it is used for, (ii) to easily customize forms and templates, and (iii) to fully describe the evolution of a wiki page and its semantic content.

A.
Using ontologies to model the domain

In OWiki the entire domain of discourse – i.e., all the topics each page talks about – is handled by using an OWL ontology, called the domain ontology. Two different kinds of classes exist in this ontology: page-domain classes, strictly related to the articles and pages visualized by the wiki, and data-domain classes, which define additional data around the former. As we have previously said, each page-domain individual results in a wiki page containing text content (stored in the MediaWiki internal database) and all semantic data directly related to that individual. Fig. 1 shows a page about a particular beer (available at http://owiki.web.cs.unibo.it) that contains a text description in the central page area, while the right-hand box lists all metadata about this particular beer. Some metadata, such as “Beer Alcoholic content” or “Beer Brewed by”, belong to the beer directly, because they are defined by OWL data or object properties having the class Beer as their domain; that is not true for other metadata, such as “Winner Award” and “Winner Awarded on”. In fact, those properties are handled using the data-domain class Awarding, which represents an event, concerning a particular prize, in which a beer participated at a specific time. The model behind the value of such properties for the beer shown in Fig. 1 is explained in the following excerpt (in Turtle syntax):

```
:carlsberg
:awardingEuropean2007 a :Awarding ;
    :hasAward :europeanBeerAward ;
    :hasYear "2007" .
```

The values shown in the Carlsberg page are not directly extracted from the Carlsberg ontological individual: they are taken from the awarding event the Carlsberg beer participated in. Even if they are not directly represented as wiki pages, OWiki uses data-domain individuals to further enrich the metadata of page-domain individuals.
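The two-step retrieval behind this enrichment can be sketched in Python with a toy in-memory triple store. Note that the predicate linking the beer to its awarding event (here called :participatedIn) is hypothetical, since the Turtle excerpt above does not name it; this is an illustration of the idea, not OWiki's actual implementation.

```python
# Toy triple store: {subject: {predicate: [objects]}}, mirroring the
# Turtle excerpt above. The :participatedIn predicate is a hypothetical
# link from the page-domain individual to the data-domain one.
graph = {
    ":carlsberg": {":participatedIn": [":awardingEuropean2007"]},
    ":awardingEuropean2007": {
        "a": [":Awarding"],
        ":hasAward": [":europeanBeerAward"],
        ":hasYear": ['"2007"'],
    },
}

def flatten(graph, page_individual, link_prop, data_props):
    """Property flattening: collect values of data_props from the
    data-domain individuals reached from page_individual via link_prop,
    i.e. two steps on the RDF graph."""
    flattened = {}
    for event in graph.get(page_individual, {}).get(link_prop, []):
        for prop in data_props:
            flattened.setdefault(prop, []).extend(
                graph.get(event, {}).get(prop, []))
    return flattened

# "Winner Award" and "Winner Awarded on" as shown on the Carlsberg page:
print(flatten(graph, ":carlsberg", ":participatedIn",
              [":hasAward", ":hasYear"]))
```

With these data, the call returns {':hasAward': [':europeanBeerAward'], ':hasYear': ['"2007"']}: the values displayed in the beer's infobox come from the awarding event, not from the beer individual itself.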
This enrichment requires retrieving related data from non-page-domain individuals, making at least two steps on the RDF graph represented by the model. We call the visualization of those data-domain property values within a page-domain individual property flattening.

B. Using ontologies to model the interface

OWiki exploits ontologies also to model end-user interfaces. In particular, the system includes a GUI ontology identifying all the components of web forms. The system instantiates and merges that ontology with the domain one in order to generate the final forms. The definitive version of the GUI ontology is still under development, but the core concepts and relations are stable and already tested in the current prototype. Separating the GUI ontology from the domain one has a two-fold goal: (1) generating a declarative description of the interface widgets that can be reused across multiple domains, without being bound to specific data, and (2) allowing users to customize final interfaces by only changing the association between content and interface widgets. Note also that the GUI ontology can be designed once and for all, while the domain one requires different expertise for different application scenarios. The GUI ontology defines two types of graphical elements: controllers and panels. Panels are containers for other elements (which can be panels, in turn) used to organize the overall interface, while controllers are single widgets allowing users to actually fill in metadata. The main class of the ontology is OWikiForm. Instances of this class will be used to generate each form associated with each wiki page. Each instance will in fact contain either graphical elements or property values from the domain ontology. OWikiForms can contain simple or complex types of controllers. Simple types will be associated with data properties in the domain ontology, while complex types will be associated with object properties.
Simple types model the basic form elements (TextField, ComboBox, CheckBox and RadioButtons), while complex types model constructs useful for mapping the GUI ontology to the domain one. In fact, there are two complex types: ConnectField and ObjectContainer. ConnectFields model links to other wiki documents. They are ultimately used to provide users with auto-completion on the corresponding form fields: when the user fills such a field, the system suggests a set of linked documents she/he can choose from (or lets her/him create a link to a completely new resource). These links are in fact derived from the relations in the domain input ontology. ObjectContainers are widgets that include properties of a class not directly linked to the one defining a particular page, thereby including in a document data about other (related) subjects. This class implements what we illustrated previously as property flattening.

### IV. STUDYING OWIKI THROUGH A USE-CASE

The main goal of OWiki is to simplify the creation of semantic data through and within wikis. The complexity of the metadata authoring process, in fact, is hidden behind the application in order not to force users to learn new interfaces and tools. They can easily create semantic data by exploiting forms and templates that are automatically generated from ontological data. In this section we explain this generation process in detail, clarifying how ontological data are converted into (customized) interfaces. Basically, the overall OWiki process consists of three steps:

1. ontology import and forms generation;
2. forms customization;
3. templates and data generation.

### A. From ontologies to forms

The first step consists of importing the input domain ontology into the wiki. Let us consider a sample application we will discuss throughout the following sections: an OWiki demo installation describing beers, breweries, ingredients, etc. Fig. 2 shows some classes of a domain ontology suitable for such an application.
Classes and properties are mapped into wiki pages following the schema briefly described in the previous section: each concept is mapped into a page, and properties are expressed through templates. In particular, data properties become lines of template infoboxes and object properties become typed links. Note that the overall status of the OWiki installation is consistent at this stage, assuming that the input domain ontology was consistent. The process is in fact a straightforward translation of classes and relations into pages and links. The OWiki conversion process also produces forms to edit the ontological content. Forms are dynamically built by analysing the class properties of the imported ontology and by mapping each property onto the proper element of the GUI interface. In the example, the class Beer defines three properties: name, type and alcohol content. According to the type of these properties, OWiki generates text fields or radio buttons. The default element is a text field that allows any type of value. Since in the input ontology the only possible values of the property beer type are Ale, Lager and Pilsner, the system adds to the form a RadioButton element offering those values. For object properties OWiki chooses between two types of widgets according to their range: if the range class is a descendant of the wiki class, the system adds a JTextField to the form; otherwise it adds an ObjectContainer. Since the Beer class has the object property brewed by with the Brewery class specified as range, for example, the system adds to the form a widget that allows users to include a link to the corresponding brewery page. This widget also provides auto-completion features built on top of the relations expressed in the input ontology.

Fig. 2. A graphical representation of the OWiki domain ontology about beers.
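The type-driven widget selection can be sketched as a small dispatch function. This is our illustration, not OWiki's actual code; we emit the ConnectField widget (the link field with auto-completion introduced in the previous section, rendered as a text field) for object properties ranging over wiki classes, and all dictionary keys below are invented for the sketch.

```python
def widget_for(prop):
    """Pick a GUI-ontology widget for a domain-ontology property.

    `prop` is a dict with keys: kind ('data' or 'object'),
    enum (a finite set of allowed values, or absent) and, for object
    properties, range_is_wiki_class (bool). Illustrative names only.
    """
    if prop["kind"] == "data":
        # Enumerated data properties get a RadioButton listing the
        # allowed values; anything else falls back to a free TextField.
        if prop.get("enum"):
            return ("RadioButton", sorted(prop["enum"]))
        return ("TextField", None)
    # Object properties: a link to another wiki page (with
    # auto-completion) or a flattened ObjectContainer.
    if prop.get("range_is_wiki_class"):
        return ("ConnectField", None)
    return ("ObjectContainer", None)

beer_type = {"kind": "data", "enum": {"Ale", "Lager", "Pilsner"}}
brewed_by = {"kind": "object", "range_is_wiki_class": True}
print(widget_for(beer_type))  # ('RadioButton', ['Ale', 'Lager', 'Pilsner'])
print(widget_for(brewed_by))  # ('ConnectField', None)
```

The key design point, discussed next, is that the dispatch looks only at the *type* of the property, never at its name or domain-specific meaning.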
One point is very important at this stage: the default mapping between classes of the domain ontology and elements in the GUI ontology is based on the type of the properties. The name of a property, or its meaning in a specific domain, is not relevant. There is in fact a configuration file that specifies, for each type, which widget to use and how to configure it. In the previous case, for instance, there was an association between enumerations and radio buttons. That mapping is applied whenever a class has a property which may only take a finite set of values, regardless of the actual domain ontology. Accordingly, a change in the OWiki configuration file would result in a different widget being used for the same property.

### B. Forms customization and filling

OWiki also includes a configuration interface that allows users to set a domain-specific mapping between the input (domain and GUI) ontologies, and to configure the overall organization of the form and its formatting properties. The first time a user edits a page, OWiki shows a basic form. The author can then organize a new form, adding dynamic behaviours, moving buttons, changing field order and so on. Fig. 3 shows a simple example of a customized form: while the original form only listed a set of plain text fields, this one is organized in panels and uses radio buttons, images and dynamic widgets. Customization can happen at different levels. The user can change the colour, font and background of the text to increase the appeal and impact of the form; she/he can change the position and order of the elements to emphasize certain data; she/he can change the optionality of the elements, their default values, and so on. The current implementation requires users to customize forms by editing an XML configuration file, through the wiki itself.
Even if such an approach is not optimal, the internal architecture of the system relies on a strong distinction between the declarative description of the form (through the GUI ontology) and its actual delivery. That makes it possible to implement a user-friendly graphical environment to create and customize forms. One of our future activities is the implementation of such an editor within the OWiki framework.

![Fig. 3. A customized form generated by OWiki.](image)

### C. From semantic data to templates and views

Automatically-generated forms are finally exploited by the wiki users to actually insert data. As described in the previous section, data are stored as templates, and templates are manipulated by forms in a transparent manner. Let us consider again the **Beer** class of the example. OWiki generates a form to create instances of that class showing three main components:

- a text field to insert the name of the beer;
- a radio button to select the type of the beer, whose values are directly extracted from the domain ontology;
- a text field to insert the brewery, which suggests breweries by exploiting information in the domain ontology.

These components can even be organized in multiple panels. Once the user fills the form, OWiki saves a template with the proper information. Infobox templates, in fact, are used to display metadata and to cluster information about the same document. Each infobox line corresponds to a field in the form that, in turn, corresponds to a parameter and its value in the domain ontology. As expected, the data properties of a class are displayed as simple text while the object properties are displayed as links to other documents.
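The round trip from filled form fields to an infobox template can be pictured as a simple serializer. This sketch is ours, not OWiki's actual code; only the template and field names follow the examples in this paper.

```python
def to_infobox(template, fields):
    """Serialize a dict of form fields into a MediaWiki infobox call.

    Object-property values are (page, label) tuples rendered as wiki
    links; data-property values are rendered as plain text.
    Hypothetical sketch, not OWiki's actual serializer.
    """
    lines = ["{{Infobox %s" % template]
    for name, value in fields.items():
        if isinstance(value, tuple):  # object property -> typed link
            page, label = value
            value = "[[%s|%s]]" % (page, label)
        lines.append("| %s=%s" % (name, value))
    lines.append("}}")
    return "\n".join(lines)

print(to_infobox("Beer", {
    "hasoWikiNamePage": "Carlsberg",
    "Beer_brewedBy": ("Brewery:Carlsberg", "Carlsberg"),
    "Beer_beerType": "Lager",
}))
```

Running this produces template source in the same shape as the Carlsberg infobox shown next: one `| name=value` line per form field, with object properties rendered as `[[page|label]]` links.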
The page corresponding to the Carlsberg beer in the example, which is an instance of the class **Beer** and has been edited via the corresponding form, will contain the following (partial) infobox:

```
{{Infobox Beer
| hasoWikiNamePage=Carlsberg
| Beer_brewedBy=[[Brewery:Carlsberg|Carlsberg]]
| Beer_beerType=Lager
| Beer_hasAlcoholicContent=2.5° - 4.5°
| Hops_hasName=Galena
| ...
}}
```

Notice that the property **Beer_brewedBy** contains a link to the page **Carlsberg** that is now an instance of the **Brewery** class. Relations in the input ontology are thus mapped into links between pages. The **Carlsberg** brewery instance follows the same approach, being described by the infobox:

```
{{Infobox Brewery
| hasoWikiNamePage=Carlsberg
| Brewery_hasAddress=Valby 11 DK - 2500, Copenhagen
| Brewery_brews=[[Brewery:Carlsberg|Carlsberg]]
}}
```

Some final considerations are worth making about the consistency of OWiki. First of all, note that OWiki forms only work on the instances of the underlying ontology, without any impact on the classes and the relations among them. The consequence is that, assuming users do not corrupt the infoboxes (which are anyway available in the source code of a wiki page), the overall ontology remains consistent. The OWiki instance is in fact consistent by construction with the domain and GUI ontologies, and it is populated via forms in a controlled way. Thus we can conclude – going back to the distinction between "wikis for ontologies" and "ontologies for wikis" proposed in the related works section – that OWiki currently belongs to the second group and does not properly use the wiki to build and update ontologies. In the future we also plan to investigate a deeper integration between the wiki and the ontology – and between the textual content of a wiki page and the related infoboxes – in order to also use OWiki as a full-fledged simplified authoring environment for ontologies.

### V. THE ARCHITECTURE OF OWIKI

OWiki is an integrated framework composed of three modules, delivered with different technologies:

- a **MediaWiki extension**: a module integrated into MediaWiki, written in PHP, that adds the OWiki facilities;
- an **Ontology manager**: a Java web service that processes OWL ontologies to produce forms for editing metadata. This manager internally uses both the Jena API (http://jena.sourceforge.net) and the OWLAPI (http://owlapi.sourceforge.net);
- an **Ajax-based interface**: a client-side module that allows users to actually insert data through the forms generated by the OWiki engine.

The **PHP** OWiki module follows the same architecture as any MediaWiki extension: some scripts and methods are overridden to provide the new features. In particular, the module implements a revised editor that initializes the OWiki environment variables, sets up the communication with the client and prepares the data necessary to store forms in the MediaWiki database without interfering with existing data. To manipulate ontologies, OWiki implements a web service that internally uses the Jena API. Jena is integrated with the Pellet reasoner (http://pellet.owldl.com), which is exploited to extract information about the instances in the ontology. The ranges of some properties, as well as their values, are in fact derived from subsumptions or other relations expressed in the ontology itself. The web service actually generates templates from the ontological data, which are later sent to the PHP module and stored in the right place of the MediaWiki installation. The connection between the PHP and Java modules, and the core of the overall framework, is the OWiki client. The client is a JavaScript application, based on Mootools (http://mootools.net), in charge of actually generating and delivering forms.
It is strongly based on the Model-View-Controller (MVC) pattern and its internal architecture can be divided into four layers:

- The Connection Layer manages the overall environment, the initialization phase and the communication between all other layers.
- The Model Layer (the Model of MVC) manages the data to be displayed on the page. It is composed of a factory that creates wrappers for each type of data and instantiates data from the ontology.
- The LookAndFeel Layer (the View of MVC) manages the final representation of the form, containing atomic and complex widgets, manipulators and decorators.
- The Interaction Layer (the Controller of MVC) implements the logic of the application, the communication with the web service, the generation of semantic data and the end-user interaction.

### VI. CREATE AND DELIVER COMMUNITY-DRIVEN TOURISTIC GUIDES

Apart from the demo wiki about beers already mentioned, we have used OWiki to develop two instantiations of community-driven semantic wikis, BYOG (http://byog.web.cs.unibo.it) and the Epistemological Grid (for the European project ACUME2, http://acume2.web.cs.unibo.it), which are illustrated in this section and in the following one. BYOG (Book Your Own Guide) is a web application that aims at creating wikis to write, share and print customized touristic guides according to user needs. Its main features are:

- it uses crowdsourcing to write touristic guides, as well as content drawn from other sites;
- it allows users to freely mash up free and structured content into itineraries and full guides;
- it provides sophisticated social tagging mechanisms;
- it uses print-on-demand technologies to print batches of guidebooks and have them sent directly to user-specified addresses;
- it provides free access to its content and also to a number of commercial services.

The three key concepts handled by BYOG are:

- **Point of Interest (PoI).** PoIs are text and metadata containers representing the basic content entities.
Each of them corresponds to an individual wiki page. Each PoI corresponds both to a particular wiki page and, as also shown in Fig. 4, to a point on a map, where it is identified by a flag icon (or something similar, depending on the software used to visualize maps). Consequently, a PoI can have text, images, metadata and so on.

- **Itineraries.** Itineraries are PoI containers, represented by ordered lists of PoIs, which can be named, described and stored as wiki pages too. Each itinerary is also a polygon on the map having PoIs as vertexes; itineraries can be created by hand, by selecting individual PoIs, generated automatically as the result of a search on the database, etc. Like PoIs, itineraries can have associated text and metadata.
- **Guides.** A guide is an ordered list of itineraries (and schedules). Each guide contains the metadata needed to create a printed book (e.g., author, ISBN, price, etc.), apart from those specific to the guide itself. Whatever applies to itineraries applies to guides as well, with the exception that guides cannot directly contain PoIs. Like PoIs and itineraries, guides also correspond to wiki pages.

These three concepts are semantically described as subclasses of owl:Thing in a well-defined OWL ontology, which has the role of driving the overall content management process, according to the schema described in Section III. This ontology is in fact composed of two parts: a BYOG-specific one that models PoIs, Itineraries and Guides, and a wiki-specific one that models wiki articles and content. The relation between these two components is very strong. Instances of the ontology are in fact mapped into wiki articles, their properties are displayed in infoboxes and indirect properties are flattened as explained in Section IIIA. Let us consider a very basic example: a user wants to create a guide with a single itinerary that includes some PoIs, each corresponding to a different hotel in the city of Bologna.
The city of Bologna is in turn a PoI, previously included in the knowledge base. When the user creates the wiki page about a new hotel – say "Hotel Porta San Mamolo" – two actions are actually performed: (i) a page is added to the wiki, editable by all other users, and (ii) a new instance of the class Hotel, subclass of the class PoI, is added to the underlying ontology. The wiki page is composed of two parts: a textual description of the PoI and an infobox that summarizes the ontological data about that PoI and its connections with other PoIs in the system. Similarly, the page editor is composed of two parts: a text area, where users can freely modify the page content, and a form, where users can add structured information about the hotel. The form is generated automatically by OWiki from the ontological description of the class Hotel. For instance, since a hotel is characterized by a name (a sequence of characters), a number of stars (an integer between 1 and 5) and a location city (a link to another PoI), the form contains an input text (for the name), a graphical widget (for the hotel quality) and a drop-down menu (to select the city where the hotel is located). Fig. 4 shows the editing interface for the PoI "Hotel Porta San Mamolo". BYOG includes the declarative descriptions of completely different forms to provide support for completely different PoIs such as museums, monuments, restaurants or hotels. These descriptions are built by combining the form widgets modelled in the GUI ontology with the data in the domain ontology (see Section III for more details).
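The containment rules among the three BYOG concepts (guides hold itineraries, itineraries hold PoIs, guides never hold PoIs directly) can be sketched with plain Python classes. All names, coordinates and book metadata below are illustrative placeholders, not BYOG's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PoI:
    """A Point of Interest: a wiki page plus a position on the map."""
    name: str
    lat: float
    lon: float

@dataclass
class Itinerary:
    """An ordered list of PoIs; on the map, a polygon with PoI vertexes."""
    name: str
    stops: List[PoI] = field(default_factory=list)

@dataclass
class Guide:
    """An ordered list of itineraries plus book metadata (author, ISBN...).
    Guides contain itineraries only, never PoIs directly."""
    title: str
    isbn: str
    itineraries: List[Itinerary] = field(default_factory=list)

    def all_pois(self) -> List[PoI]:
        # PoIs are only reachable through the itineraries.
        return [p for it in self.itineraries for p in it.stops]

bologna = PoI("Bologna", 44.49, 11.34)
hotel = PoI("Hotel Porta San Mamolo", 44.48, 11.34)
tour = Itinerary("Hotels in Bologna", [bologna, hotel])
guide = Guide("Bologna on Foot", "978-0-00-000000-0", [tour])
print([p.name for p in guide.all_pois()])
# ['Bologna', 'Hotel Porta San Mamolo']
```

In BYOG these constraints live in the OWL ontology rather than in code, but the reachability rule is the same: a guide reaches its PoIs only through its itineraries.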
When saving the page, the data inserted through the form are saved into the knowledge base and shown in the infobox. Similarly to the beer example discussed earlier, BYOG users can customize the form (if they have the access permissions to do so) and change both object properties (adding typed links to pages corresponding to PoIs or other objects in the ontology) and data properties (changing atomic values in the form). Once all the PoIs have been collected, the user can switch to the guides editor, which allows him to create detailed travel guides. This editor is a highly interactive Ajax-based application, very intuitive and easy to use even for inexpert users. The BYOG framework is actually completed by other modules that are out of the scope of this work: a semantic engine, to search for detailed content; an importer, to generate PoIs from external web sites; and a printing-on-demand module, for the actual creation and delivery of physical guides.

### VII. USING OWIKI TO INTERFACE HETEROGENEOUS DOMAINS OF KNOWLEDGE

ACUME2 is a European Thematic Network Project (ETNP) within the Socrates-Erasmus Programme to interface sciences, literature and humanities.
The intention of ACUME2 is to approach the concept of interfacing both from a truly theoretical and from an applied point of view, in order to build an infrastructure where researchers from various disciplines can communicate without ambiguities. One of the problems encountered when collecting the output of very different communities and experts is the clash of vocabularies and the different uses of even the same terms by different disciplines. To provide a guidance that does not prevent communities from using such terms, but at least acknowledges and expresses the differences in meaning, the main target of ACUME2 has been the implementation of an epistemological grid for all the sub-projects of the thematic network. The grid is a table of definitions of terms created by the users, whose points of intersection are represented by the definition of a "term-value" pair. In a community where several people have the task of defining common terms according to different disciplines, it is useful to have a common model able to better define the meaning of those terms. It was soon realized that generating just a bunch of different definitions for the same terms would have provided very little insight into the real differences and similarities. For this reason it was established that also comparing how each term refers to others, within the same discipline and outside of it, would be a much more interesting way to shed light on similarities and differences. The aim of the epistemological grid is therefore to have a free-text description of each term for each discipline and, in addition, to generate a network of labelled references to other terms and concepts, again for each term and each discipline. The ACUME2 epistemological grid has been developed and finally deployed through an instantiation of OWiki, called AcumeWiki.
The system, in fact, managed to provide exactly the right kind of technological infrastructure needed to express connections and relations among terms. The balance between the free wiki editing model and the aided production of ontologically consistent data was particularly appreciated by the researchers. In fact, several researchers from 12 research institutes and universities collaborated to create the epistemological grid, composed of dozens of terms and hundreds of relations. Before going on to discuss the deployment of AcumeWiki, it is necessary to briefly describe the conceptual models behind the epistemological grid. Two different conceptual models were considered and applied: the SKOS thesaurus, currently developed within the W3C framework (http://www.w3.org/TR/skos-reference), and the McLuhanian tetrad [9]. SKOS, or Simple Knowledge Organization System, is a family of formal languages designed for the representation of thesauri, classification schemes, taxonomies, subject-heading systems, and any other type of structured controlled vocabulary. SKOS is built upon RDF and RDFS, and its main objective is to enable the easy publication of controlled structured vocabularies for the Semantic Web. SKOS defines a "concept scheme" that allows the use of mainly hierarchical relations (such as broader, narrower, and related) that are relevant in analysing and classifying a concept space. The concept scheme can then be locally extended with new concepts referring to existing ones, e.g. as specializations of these. Thus we can obtain connections between the defined terms within the same discipline. The use of SKOS within the epistemological grid, therefore, provides support for the correct placement of each term within a specific semantic tree, and sheds light on its closest related terms. Our humanistic counterparts also suggested adopting, for term definitions, the tetrad of effects originally devised by Marshall McLuhan for the effects of media [9].
McLuhan supplied a "definition scheme" that allows the expression of four specific relationships between the effects of media: enhances, retrieves, reverses into and obsolesces. The variety of the resulting tetrads is generated by the different backgrounds of the people involved and thus by their different analyses of the same concept. This approach can yield insights and considerations that are suggestive of the multiple nature and complementarity of each definition of the same term. Both the SKOS and the McLuhan models were combined in AcumeWiki to improve the quality and richness of the epistemological grid. The OWiki domain ontology (see Section IIIA for more details) was in fact composed of two parts: the imported SKOS ontology and the ontological representation of the McLuhan model. In particular, the ontology includes three main classes: a Document class, corresponding to the wiki page; a Term class, corresponding to a term in the grid; and a Concept class, corresponding to the concepts in the tetrads. The class Concept is then further specialized into Enhance, Retrieves, ReversInto and Obsolesces. The Document class has an object property hasTerm that makes it possible to associate a wiki page to each term in the grid (its co-domain is in fact the Term class) and, in turn, the Term class has an object property hasConcept to connect a term instance with the abstract concept it represents (its co-domain is in fact the Concept class). Relations among terms are expressed through the SKOS ontology, while relations among concepts (and the terms related to those concepts) are expressed through the tetrads. As a result, an entangled graph of relations among terms can be created by researchers with their different competencies and interests. The AcumeWiki engine processes this ontology to automatically generate both the wiki interfaces and the infoboxes connected to such properties, following the rules described in the previous sections. Fig. 5 shows a sample wiki page about a term.
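The entangled graph of SKOS and tetrad relations can be pictured as a labelled edge list that is filtered by relation family when rendering the infobox. A toy sketch with invented term names (AcumeWiki actually stores these as OWL individuals of the classes just described):

```python
# Toy labelled graph of term definitions: each edge relates a term to
# another via a SKOS or tetrad relation. All terms are invented examples.
relations = [
    ("memory", "skos:broader", "cognition"),
    ("memory", "skos:related", "trauma"),
    ("memory", "tetrad:enhances", "identity"),
    ("memory", "tetrad:obsolesces", "oral tradition"),
]

def neighbours(term, prefix=""):
    """Terms reachable from `term` via relations whose label starts
    with `prefix` ('skos:' or 'tetrad:' to filter by model)."""
    return sorted(o for s, p, o in relations
                  if s == term and p.startswith(prefix))

print(neighbours("memory", "skos:"))    # ['cognition', 'trauma']
print(neighbours("memory", "tetrad:"))  # ['identity', 'oral tradition']
```

Filtering by prefix mirrors the three-part infobox described next: the tetrad relations and the SKOS relations are shown in separate sections of the same term page.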
Fig. 5. Example of a page representing a term in the ACUME2 epistemological grid.

The page is composed of a textual description and an infobox summarizing the ontological properties of the term. Relations with other terms are encoded as typed links to the pages representing those terms. Note that the infobox is composed of three parts that respectively show the tetrads (at the top), the SKOS relations (in the middle) and some metadata about the page itself. In fact, we also added a basic set of Dublin Core (http://www.dublincore.org/documents/1998/09/dces) properties to provide basic documental metadata over the definitions. When editing the page, the main wiki editing textarea allows for the generation of the actual textual definition of each concept, and a number of labelled form elements provide a way to connect the term to others, both already existing and yet to be defined (as is well known, the wiki model can easily be used to create links to potential pages, i.e., pages that do not exist yet but whose nature and title we can already foresee). Once the page is saved, the terms specified in the form elements become links to actual or yet-undefined terms that can be either explored or further defined, in a never-ending expansion of the semantic space of the defined terms.

### VIII. CONCLUSIONS

The strength of the OWiki approach relies on its connection with ontologies and its capability of providing users with simple but powerful interfaces. OWiki makes it possible, on the one hand, to create consistent ontological data and, on the other, to edit these data through intuitive forms and templates. The development of the tool is not complete yet. The current implementation is an integrated prototype that allows users to load and manipulate ontological data regardless of their application domain. A wider end-user evaluation of OWiki is the first item on our agenda.
We plan to perform a quantitative and qualitative analysis of users' involvement in using the tool and to compare OWiki with other ontology-based solutions. More functionalities will also be added in future releases of the system: a sophisticated interface for customizing forms, a larger set of widgets described in the GUI ontology and integrable in OWiki pages, and a more flexible importer. We also plan to study solutions for letting users use either plain wiki textareas or templates to edit the same semantic data.

ACKNOWLEDGMENTS

The authors would like to thank Vita Fortunati, Claus Huittfeld, Dino Buzzetti and Ewa Sikora, involved in the ACUME2 project. The authors also owe endless credit to Silvia Duca and Valentina Bolognini for their previous work on Gaffe.

REFERENCES
End-to-end Deep Learning of Optimization Heuristics

Digital Object Identifier (DOI): 10.1109/PACT.2017.24
Document Version: Peer reviewed version
Published In: 2017 26th International Conference on Parallel Architectures and Compilation Techniques (PACT)

Abstract

Accurate automatic optimization heuristics are necessary for dealing with the complexity and diversity of modern hardware and software. Machine learning is a proven technique for learning such heuristics, but its success is bound by the quality of the features used. These features must be handcrafted by developers through a combination of expert domain knowledge and trial and error. This makes the quality of the final model directly dependent on the skill and available time of the system architect. Our work introduces a better way for building heuristics. We develop a deep neural network that learns heuristics over raw code, entirely without using code features. The neural network simultaneously constructs appropriate representations of the code and learns how best to optimize, removing the need for manual feature creation.
Further, we show that our neural nets can transfer learning from one optimization problem to another, improving the accuracy of new models, without the help of human experts. We compare the effectiveness of our automatically generated heuristics against ones with features hand-picked by experts. We examine two challenging tasks: predicting optimal mapping for heterogeneous parallelism and GPU thread coarsening factors. In 89% of the cases, the quality of our fully automatic heuristics matches or surpasses that of state-of-the-art predictive models using hand-crafted features, providing on average 14% and 12% more performance with no human effort expended on designing features.

1. Introduction

There are countless scenarios during the compilation and execution of a parallel program where decisions must be made as to how, or if, a particular optimization should be applied. Modern compilers and runtimes are rife with hand-coded heuristics which perform this decision making. The performance of parallel programs is thus dependent on the quality of these heuristics. Hand-written heuristics require expert knowledge, take a lot of time to construct, and in many cases lead to suboptimal decisions. Researchers have focused on machine learning as a means to constructing high quality heuristics that often outperform their handcrafted equivalents [1-4]. A predictive model is trained, using supervised machine learning, on empirical performance data and important quantifiable properties, or features, of representative programs. The model learns the correlation between these feature values and the optimization decision that maximizes performance. The learned correlations are used to predict the best optimization decisions for new programs. Previous works in this area were able to build machine learning based heuristics that outperform ones created manually by experts and did so with less effort [5, 6].
Still, experts are not completely removed from the design process, which is shown in Figure 1a. Selecting the appropriate features is a manual undertaking which requires a deep understanding of the system. The designer essentially decides which compile or runtime characteristics affect optimization decisions and expresses them in ways that make it easy to model their relationship to performance. Failing to identify an important feature has a negative effect on the resulting heuristic. For example, in [7] the authors discovered that [5] did not identify one such feature, causing performance to be 40% lower on average. To make heuristic construction fast and cheap, we must take humans out of the loop. While techniques for automatic feature generation from the compiler IR have been proposed in the past [8, 9], they do not solve the problem in a practical way. They are deeply embedded into the compiler, require expert knowledge to guide the generation, have to be repeated from scratch for every new heuristic, and their search time can be prohibitive. Our insight was that such costly approaches are not necessary any more. Deep learning techniques have shown astounding successes in identifying complex patterns and relationships in images [10, 11], audio [12], and even computer code [7, 13, 14]. We hypothesized that deep neural networks should be able to automatically extract features from source code. Our experiments showed that even this was a conservative target: with deep neural networks we can bypass static feature extraction and learn optimization heuristics directly on raw code. Figure 1b shows our proposed methodology. Instead of manually extracting features from input programs to generate training data, program code is used directly in the training data. Programs are fed through a series of neural networks which learn how code correlates with performance. 
Internally and without prior knowledge, the networks construct complex abstractions of the input program characteristics and correlations between those abstractions and performance. Our work replaces the need for compile-time or static code features, merging feature and heuristic construction into a single process of joint learning. Our system admits auxiliary features to describe information unavailable at compile time, such as the sizes of runtime input parameters. Beyond these optional inclusions, we are able to learn optimization heuristics without human supervision or guidance. By employing transfer learning [15], our approach is able to produce high quality heuristics even when learning on a small number of programs. The properties of the raw code that are abstracted by the beginning layers of our neural networks are mostly independent of the optimization problem. We reuse these parts of the network across heuristics, and, in the process, we speed up learning considerably. We evaluated our approach on two problems: heterogeneous device mapping and GPU thread coarsening. Good heuristics for these two problems are important for extracting performance from heterogeneous systems, and the fact that machine learning has been used before for heuristic construction for these problems allows direct comparison. Prior machine learning approaches resulted in good heuristics which extracted 73% and 79% of the available performance respectively, but required extensive human effort to select the appropriate features. Nevertheless, our approach was able to outperform them by 14% and 12%, which indicates a better identification of important program characteristics, without any expert help. We make the following contributions:
- We present a methodology for building compiler heuristics without any need for feature engineering.
- A novel tool DeepTune for automatically constructing optimization heuristics without features.
DeepTune outperforms existing state-of-the-art predictive models by 14% and 12% in two challenging optimization domains.
- We apply, for the first time, transfer learning on compile-time and runtime optimizations, improving the heuristics by reusing training information across different optimization problems, even if they are completely unrelated.

DeepTune is an end-to-end machine learning pipeline for optimization heuristics. Its primary input is the source code of a program to be optimized, and through a series of neural networks, it directly predicts the optimization which should be applied. By learning on source code, our approach is not tied to a specific compiler, platform, or optimization problem. The same design can be reused to build multiple heuristics. The most important innovation of DeepTune is that it forgoes the need for human experts to select and tune appropriate features.

2.1. System Overview

Figure 2 provides an overview of the system. A source rewriter removes semantically irrelevant information (such as comments) from the source code of the target program and passes it to a language model. The language model converts the arbitrary length stream of code into a fixed length vector of real values which fully capture the properties and structure of the source, replacing the role of hand designed features. We then optionally concatenate this vector with auxiliary inputs, which allow passing additional data about runtime or architectural parameters to the model for heuristics which need more than just compile-time information. Finally, a standard feed-forward network is used to predict the best heuristic parameters to optimize the program. DeepTune is open source\(^1\). We implemented the model using Keras, with TensorFlow [16] and Theano [17] backends.

2.2. Language Model

Learning effective representations of source code is a difficult task.
A successful model must be able to:
- derive semantic and syntactic patterns of a programming language entirely from sample codes;
- identify the patterns and representation in source codes which are relevant to the task at hand; and
- discriminate performance characteristics arising from potentially subtle differences in similar codes.

To achieve this task, we employ state-of-the-art language modeling techniques, coupled with a series of generic, language agnostic code transformations.

\(^1\) DeepTune is available at: http://chriscummins.cc/deeptune

**Source Rewriter** To begin with, we apply a series of source normalizing transformations, similar to those of Cummins et al. [7]. These transformations, implemented as an LLVM pass, parse the AST, removing conditional compilation, then rebuild the input source code using a consistent code style and identifier naming scheme. The role of source normalization is to simplify the task of modeling source code by ensuring that trivial semantic differences in programs, such as the choice of variable names or the insertion of comments, do not affect the learned model. Figures 3a and 3b show the source rewriting applied to a simple program.

**Sequence Encoder** We encode source code as a sequence of integers for interpretation by neural networks, where each integer is an index into a predetermined vocabulary. In [7], a character based vocabulary is used. This minimizes the size of the vocabulary, but leads to long sequences which are harder to extract structure from. In [18], a token based vocabulary is used. This leads to shorter sequences, but tokenizing real codes causes an explosion in the size of the vocabulary, as every identifier and literal must be represented uniquely. We designed a hybrid, partially tokenized approach. This allows common multi-character sequences such as `float` and `if` to be represented as unique vocabulary items, but literals and other infrequently used words to be encoded at the character level.
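A minimal sketch of this hybrid encoding, using a small hypothetical candidate vocabulary (the real system derives its vocabulary from the full set of OpenCL tokens, as described next):

```python
def encode(source, candidate_vocab):
    """Greedy longest-match tokenization: multi-character language
    tokens win over single characters; anything not in the candidate
    vocabulary falls back to character-level encoding."""
    ordered = sorted(candidate_vocab, key=len, reverse=True)
    tokens, i = [], 0
    while i < len(source):
        match = next((t for t in ordered if source.startswith(t, i)),
                     source[i])  # fallback: a single character
        tokens.append(match)
        i += len(match)
    return tokens

def derive_vocab(tokens):
    # Map tokens to integer indices, ordered by first appearance.
    return {tok: idx for idx, tok in enumerate(dict.fromkeys(tokens))}

tokens = encode("__kernel void A(int c)", ["__kernel", "void", "int", "float"])
vocab = derive_vocab(tokens)
encoded = [vocab[t] for t in tokens]
# tokens  -> ['__kernel', ' ', 'void', ' ', 'A', '(', 'int', ' ', 'c', ')']
# encoded -> [0, 1, 2, 1, 3, 4, 5, 1, 6, 7]
```

Multi-character keywords become single vocabulary items, while the identifiers `A` and `c` and the punctuation fall back to character-level entries.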
We first assembled a candidate vocabulary $V_c$ for the OpenCL programming language containing the 208 data types, keywords, and language built-ins of the OpenCL specification. We then derived the subset of the candidate vocabulary $V \subseteq V_c$ which is required to encode a corpus of 45k lines of handwritten GPGPU benchmark suite kernels. Beginning with the first character in the corpus, our algorithm consumes the longest matching sequence from the candidate vocabulary. This process continues until every character in the corpus has been consumed. The resulting derived vocabulary consists of 128 symbols which we use to encode new program sources. Figure 3c shows the vocabulary derived for the single input source code of Figure 3b.

**Embedding** During encoding, tokens in the vocabulary are mapped to unique integer values, e.g. `float` $\rightarrow$ 0, `int` $\rightarrow$ 1. The integer values chosen are arbitrary, and offer a sparse data representation, meaning that a language model cannot infer the relationships between tokens based on their integer values. This is in contrast to the dense representations of other domains, such as pixels used in computer vision, which can be interpolated between to derive the differences between colors.

```
// define Elements
__kernel void memset_kernel(__global char * mem_d,
                            short val, int number_bytes)
{
    const int thread_id = get_global_id(0);
    mem_d[thread_id] = val;
}
```

(a) An example, short OpenCL kernel, taken from Nvidia's streamcluster.

```
__kernel void A(__global char* a, short b, int c)
{
    const int d = get_global_id(0);
    a[d] = b;
}
```

(b) The streamcluster kernel after source rewriting. Variable and function names are normalized, comments removed, and code style enforced.

(c) Derived vocabulary of 27 tokens (e.g. 1 $\rightarrow$ `__kernel`, 4 $\rightarrow$ `A`, 6 $\rightarrow$ `__global`, 11 $\rightarrow$ `short`, 13 $\rightarrow$ `int`, 19 $\rightarrow$ `const`, 22 $\rightarrow$ `get_global_id`), ordered by appearance in the input (b). The vocabulary maps tokens to integer indices.

(d) Indices encoded kernel sequence. Sequences may be padded to a fixed length by repeating an out-of-vocabulary integer (e.g. -1).
**Figure 3:** Deriving a tokenized 1-of-k vocabulary encoding from an OpenCL source code.

To mitigate this, we use an embedding, which translates tokens in a sparse, integer encoded vocabulary into a lower dimensional vector space, allowing semantically related tokens like `float` and `int` to be mapped to nearby points [19, 20]. An embedding layer maps each token in the integer encoded vocabulary to a vector of real values. Given a vocabulary size $V$ and embedding dimensionality $D$, an embedding matrix $W_F \in \mathbb{R}^{V \times D}$ is learned during training, so that an integer encoded sequence of tokens $t \in \mathbb{N}^L$ is mapped to the matrix $T \in \mathbb{R}^{L \times D}$. We use an embedding dimensionality $D = 64$.

**Sequence Characterization** Once source codes have been encoded and translated into sequences of embedding vectors, neural networks are used to extract a fixed size vector which characterizes the entire source sequence. This is comparable to the hand engineered feature extractors used in existing approaches to predictive modeling, but is a learned process that occurs entirely and automatically within the hidden layers of the network. We use the Long Short-Term Memory (LSTM) architecture [21] for sequence characterization. LSTMs implement a recurrent neural network in which the activations of neurons are learned with respect not just to their current inputs, but to previous inputs in a sequence. Unlike regular recurrent networks, in which the strength of learning decreases over time (a symptom of the vanishing gradients problem [22]), LSTMs employ a forget gate with a linear activation function, allowing them to retain activations for arbitrary durations.
This makes them effective at learning complex relationships over long sequences [23], an especially important capability for modeling program code, as dependencies in sequences frequently occur over long ranges (for example, a variable may be declared as an argument to a function and used throughout). We use a two layer LSTM network. The network receives a sequence of embedding vectors, and returns a single output vector, characterizing the entire sequence.

2.3. Auxiliary Inputs

We support an arbitrary number of additional real valued auxiliary inputs. Each scalar of the heuristic model's inputs is normalized to a mean of 0 and a standard deviation of 1:

$$x_i' = \frac{x_i - \mu(X)}{\sigma(X)}$$

The heuristic model's layers provide activations in the range [0, 1]; the weights are learned during training. The activation of each neuron in the output layer represents the model's confidence that the corresponding decision is the correct one. We take the argmax of the output layer to find the decision with the largest activation. For example, for a binary optimization heuristic the final layer will consist of two neurons, and the predicted optimization is the neuron with the largest activation.

2.5. Training the network

DeepTune is trained in the same manner as existing predictive model approaches, the key difference being that instead of having to manually create and extract features from programs, we simply use the raw program codes themselves. The model is trained with Stochastic Gradient Descent (SGD), using the Adam optimizer [27]. For training data $X_1 \ldots X_n$, SGD attempts to find the model parameters $\Theta$ that minimize the output of a loss function:

$$\Theta = \arg \min_{\Theta} \frac{1}{n} \sum_{i=1}^{n} \ell(X_i, \Theta)$$

where the loss function $\ell(X_i, \Theta)$ computes the logarithmic difference between the predicted and expected values.
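The input normalization and the argmax decision described above can be sketched as follows; the helper names are illustrative, not taken from the DeepTune source:

```python
import math

def standardize(values):
    # Normalize one scalar input column to mean 0 and standard
    # deviation 1, as done for the heuristic model's inputs.
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]

def predict(output_activations):
    # The predicted optimization is the index of the output neuron
    # with the largest activation (argmax).
    return max(range(len(output_activations)),
               key=output_activations.__getitem__)

standardize([2.0, 4.0, 6.0])  # mean 0, unit standard deviation
predict([0.31, 0.69])         # -> 1: the second decision wins
```

For a binary heuristic the output layer has two neurons, so `predict` returns 0 or 1; for the six-way coarsening heuristic it would index one of six neurons.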
To reduce training time, multiple inputs are batched together and fed into the neural network simultaneously, reducing the frequency of costly weight updates during back-propagation. This requires that the inputs to the language model be the same length. To achieve this, we pad all sequences up to a fixed length of 1024 tokens using a special padding token. This allows matrices of $\textit{batch\_size} \times \textit{max\_seq\_len}$ tokens to be processed simultaneously. We note that batching and padding sequences to a maximum length is only to improve training time. Once deployed for prediction, sequences do not need to be padded, allowing classification of arbitrary length codes.

3. Experimental Methodology

We apply DeepTune to two heterogeneous compiler-based machine learning tasks and compare its performance to state-of-the-art approaches that use expert selected features.

3.1. Case Study A: OpenCL Heterogeneous Mapping

OpenCL provides a platform-agnostic framework for heterogeneous parallelism. This allows a program written in OpenCL to execute transparently across a range of different devices, from CPUs to GPUs and FPGAs. Given a program and a choice of execution devices, the question then is: on which device should we execute the program to maximize performance?

**State-of-the-art** In [5], Grewe et al. develop a predictive model for mapping OpenCL kernels to the optimal device in CPU/GPU heterogeneous systems. They use supervised learning to construct decision trees, using a combination of static and dynamic kernel features. The static program features are extracted using a custom LLVM pass; the dynamic features are taken from the OpenCL runtime.

**Expert Chosen Features** Table 1a shows the features used by their work. Each feature is an expression built upon the code and runtime metrics given in Table 1b.

**Experimental Setup** We replicate the predictive model of Grewe et al. [5].
We replicate the experimental setup of [7], in which the experiments are extended to a larger set of 71 programs, summarized in Table 2a. The programs were evaluated on two CPU-GPU platforms, detailed in Table 3a.

**DeepTune Configuration** Figure 4a shows the neural network configuration of DeepTune for the task of predicting optimal device mapping. We use the OpenCL kernel source code as input, along with the two dynamic values, workgroup size and data size, available to the OpenCL runtime.

**Model Evaluation** We use stratified 10-fold cross-validation to evaluate the quality of the predictive models [28]. Each program is randomly allocated into one of 10 equally-sized sets; the sets are balanced to maintain a distribution of instances from each class consistent with the full set. A model is trained on the programs from all but one of the sets, then tested on the programs of the unseen set. This process is repeated for each of the 10 sets, to construct a complete prediction over the whole dataset.

3.2. Case Study B: OpenCL Thread Coarsening Factor

Thread coarsening is an optimization for parallel programs in which the operations of two or more threads are fused together. This optimization can prove beneficial on certain combinations of programs and architectures, for example programs with a large potential for Instruction Level Parallelism on Very Long Instruction Word architectures.

**State-of-the-art** Magni et al. present a predictive model for OpenCL thread coarsening in [6]. They implement an iterative heuristic which determines whether a given program would benefit from coarsening.

Figure 5: Two approaches for predicting coarsening factor (CF) of OpenCL kernels. Magni et al. reduce the multi-label classification problem to a series of binary decisions, by iteratively applying the optimization and computing new feature vectors. Our approach simply predicts the coarsening factor directly from the source code.
If the model predicts that a program will benefit from coarsening, the program is coarsened, and the process repeats, allowing further coarsening. In this manner, the problem is reduced from a multi-label classification problem into a series of binary decisions, shown in Figure 5a. They select from one of six possible coarsening factors: $(1, 2, 4, 8, 16, 32)$, divided into 5 binary choices.

**Expert Chosen Features** Magni et al. followed a very comprehensive feature engineering process. 17 candidate features were assembled from a previous study of performance counters [34] and computed theoretical values [35]. For each candidate feature they compute its coarsening delta, reflecting the change in each feature value caused by coarsening:

$$f_{\Delta} = \frac{f_{\text{after}} - f_{\text{before}}}{f_{\text{before}}}$$

They then use Principal Component Analysis (PCA) on the 34 candidates (17 features plus their 17 deltas) and selected the first 7 principal components, accounting for 95% of the variance in the feature space.

**Experimental Setup** We replicate the experimental setup of Magni et al. [6]. The thread coarsening optimization is evaluated on 17 programs, listed in Table 2b. Four different GPU architectures are used, listed in Table 3b.

**DeepTune Configuration** Figure 4b shows the neural network configuration. We use the OpenCL kernel as input, and directly predict the coarsening factor.

**Model Evaluation** Compared to Case Study A, the size of the evaluation is small. We use leave-one-out cross-validation to evaluate the predictive models. For each program, a model is trained on data from all other programs and used to predict the coarsening factor of the excluded program. Because [6] does not describe the parameters of the neural network, we perform an additional, nested cross-validation process to find the optimal network parameters for the Magni et al. model. For every program in the training set, we evaluate 48 combinations of network parameters.
We select the best performing configuration from these 768 results to train a model for prediction on the excluded program. This nested cross-validation is repeated for each of the training sets. We do not perform this tuning of hyper-parameters for DeepTune.

<table> <thead> <tr> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>BasicBlocks</td> <td>#. basic blocks</td> </tr> <tr> <td>Branches</td> <td>#. branches</td> </tr> <tr> <td>DivInsts</td> <td>#. divergent instructions</td> </tr> <tr> <td>DivRegionInsts</td> <td>#. instructions in divergent regions</td> </tr> <tr> <td>DivRegionInstsRatio</td> <td>#. instr. in divergent regions / total instructions</td> </tr> <tr> <td>DivRegions</td> <td>#. divergent regions</td> </tr> <tr> <td>TotInsts</td> <td>#. instructions</td> </tr> <tr> <td>FPInsts</td> <td>#. floating point instructions</td> </tr> <tr> <td>ILP</td> <td>average ILP / basic block</td> </tr> <tr> <td>Int/FP Inst Ratio</td> <td>#. integer instructions / #. floating point instructions</td> </tr> <tr> <td>IntInsts</td> <td>#. integer instructions</td> </tr> <tr> <td>MathFunctions</td> <td>#. math builtin functions</td> </tr> <tr> <td>MLP</td> <td>average MLP / basic block</td> </tr> <tr> <td>Loads</td> <td>#. loads</td> </tr> <tr> <td>Stores</td> <td>#. stores</td> </tr> <tr> <td>UniformLoads</td> <td>#. loads unaffected by coarsening direction</td> </tr> <tr> <td>Barriers</td> <td>#. barriers</td> </tr> </tbody> </table>

Table 4: Candidate features used by Magni et al. for predicting thread coarsening. From these values, they compute relative deltas for each iteration of coarsening, then use PCA for selection.

<table> <thead> <tr> <th></th> <th>#. neurons</th> <th>#.
parameters</th> </tr> </thead> <tbody> <tr> <td></td> <td>HM CF</td> <td>HM CF</td> </tr> <tr> <td>Embedding</td> <td>64 64</td> <td>8,256 8,256</td> </tr> <tr> <td>LSTM_1</td> <td>64 64</td> <td>33,024 33,024</td> </tr> <tr> <td>LSTM_2</td> <td>64 64</td> <td>33,024 33,024</td> </tr> <tr> <td>Concatenate</td> <td>64+2 –</td> <td>– –</td> </tr> <tr> <td>Batch Norm</td> <td>66 64</td> <td>264 256</td> </tr> <tr> <td>DNN_1</td> <td>32 32</td> <td>2,144 2,080</td> </tr> <tr> <td>DNN_2</td> <td>2 6</td> <td>66 198</td> </tr> <tr> <td>Total</td> <td></td> <td>76,778 76,838</td> </tr> </tbody> </table>

Table 5: The size and number of parameters of the DeepTune components of Figure 4, configured for heterogeneous mapping (HM) and coarsening factor (CF).

3.3. Comparison of Case Studies

For the two different optimization heuristics, the authors arrived at very different predictive model designs, with very different features. By contrast, we take exactly the same approach for both problems. None of DeepTune's parameters were tuned for the case studies presented above. Their settings represent conservative choices expected to work reasonably well for most scenarios. Table 5 shows the similarity of our models. The only differences between our network designs are the auxiliary inputs for Case Study A and the number of optimization decisions. The differences between the DeepTune configurations amount to only two lines of code: the first adds the two auxiliary inputs; the second increases the size of the output layer for Case Study B from two neurons to six. The description of these differences is larger than the differences themselves.

4. Experimental Results

We evaluate the effectiveness of DeepTune for two distinct OpenCL optimization tasks: predicting the optimal device to use to run a given OpenCL program, and predicting thread coarsening factors.
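The two-line configuration difference described in Section 3.3 can be illustrated with a hypothetical constructor; the layer sizes follow Table 5, but the function and its names are illustrative, not DeepTune's actual API:

```python
def build_deeptune(num_decisions, aux_inputs=0):
    # Every heuristic shares the same language model; only the
    # auxiliary inputs and the output size vary between tasks.
    layers = ["Embedding(64)", "LSTM(64)", "LSTM(64)"]
    if aux_inputs:
        # Difference #1: concatenate auxiliary inputs (Case Study A).
        layers.append("Concatenate(+%d)" % aux_inputs)
    layers += ["BatchNorm", "Dense(32)"]
    # Difference #2: output layer sized to the number of decisions.
    layers.append("Dense(%d)" % num_decisions)
    return layers

heterogeneous_mapping = build_deeptune(num_decisions=2, aux_inputs=2)
coarsening_factor = build_deeptune(num_decisions=6)
```

Heterogeneous mapping gets the two runtime auxiliary inputs and a two-neuron output; coarsening gets no auxiliary inputs and a six-neuron output.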
We first compare DeepTune against two expert-tuned predictive models, showing that DeepTune outperforms the state-of-the-art in both cases. We then show that by leveraging knowledge learned from training DeepTune for one heuristic, we can boost training for the other heuristic, further improving performance. Finally, we analyze the working mechanism of DeepTune.

4.1. Case Study A: OpenCL Heterogeneous Mapping

Selecting the optimal execution device for OpenCL kernels is essential for maximizing performance. For a CPU/GPU heterogeneous system, this presents a binary choice. In this experiment, we compare our approach against a static single-device approach and the Grewe et al. predictive model. The static mapping selects the device which gave the best average case performance over all the programs. On the AMD platform, the best-performing device is the CPU; on the NVIDIA platform, it is the GPU. Figure 6 shows the accuracy of both predictive models and the static mapping approach for each of the benchmark suites. The static approach is accurate for only 58.8% of cases on AMD and 56.9% on NVIDIA. This suggests the need for choosing the execution device on a per program basis. The Grewe et al. model achieves an average accuracy of 73%, a significant improvement over the static mapping approach. By automatically extracting useful feature representations from the source code, DeepTune gives an average accuracy of 82%, an improvement over the other two schemes. Using the static mapping as a baseline, we compute the relative performance of each program using the device selected by the Grewe et al. and DeepTune models. Figure 7 shows these speedups. Both predictive models significantly outperform the static mapping; the Grewe et al. model achieves an average speedup of 2.91× on AMD and 1.26× on NVIDIA (geomean 1.18×). In 90% of cases, DeepTune matches or outperforms the predictions of the Grewe et al.
model, achieving an average speedup of 3.34× on AMD and 1.41× on NVIDIA (geomean 1.31×). This 14% improvement in performance comes at a greatly reduced cost, requiring no intervention by humans.

4.2. Case Study B: OpenCL Thread Coarsening Factor

Exploiting thread coarsening for OpenCL kernels is a difficult task. On average, coarsening slows programs down. The maximum speedup attainable by a perfect heuristic is 1.36×. Figure 8 shows speedups achieved by the Magni et al. and DeepTune models for all programs and platforms. We use as baseline the performance of programs without coarsening. On the four experimental platforms (AMD HD 5900, Tahiti 7970, NVIDIA GTX 480, and Tesla K20c), the Magni et al. model achieves average speedups of 1.21×, 1.01×, 0.86×, and 0.94×, respectively. DeepTune outperforms this on three of the four platforms, achieving speedups of 1.10×, 1.05×, 1.10×, and 0.99×. Some programs, especially those with large divergent regions or indirect memory accesses, respond very poorly to coarsening. No performance improvement is possible on the mvCoal and spmv programs. Both models fail to achieve positive average speedups on the NVIDIA Tesla K20c, because thread coarsening does not give performance gains for the majority of the programs on this platform. The disappointing results for both predictive models can be attributed to the small training program set used by Magni et al. (only 17 programs in total). As a result, the models suffer from sparse training data. Prior research has shown that data sparsity can be overcome using additional programs; in the following subsection we describe and test a novel strategy for training optimization heuristics on a small number of programs by exploiting knowledge learned from other optimization domains.

4.3. Transfer Learning Across Problem Domains

There are inherent differences between the tasks of building heuristics for heterogeneous mapping and thread coarsening, evidenced by the contrasting choices of features and models in Grewe et al.
and Magni et al. However, in both cases, the first role of DeepTune is to extract meaningful abstractions and representations of OpenCL code. Prior research in deep learning has shown that models trained on similar inputs for different tasks often share useful commonalities. The idea is that in neural network classification, information learned at the early layers of neural networks (i.e. closer to the input layer) will be useful for multiple tasks. The later the network layers are (i.e. closer to the output layer), the more specialized the layers become [36]. We hypothesized that this would be the case for DeepTune, enabling the novel transfer of information between models across different optimization domains. To test this, we extracted the language model (the Embedding, LSTM_1, and LSTM_2 layers) trained for the heterogeneous mapping task and transferred it over to the new task of thread coarsening. Since DeepTune keeps the same design for both optimization problems, this is as simple as copying the learned weights of the three layers. Then we trained the model as normal.

As shown in Figure 8, our newly trained model, DeepTune-TL, has improved performance on 3 of the 4 platforms: 1.17×, 1.23×, 1.14×, and 0.93×, providing an average 12% performance improvement over Magni et al. In 81% of cases, the use of transfer learning matched or improved the optimization decisions of DeepTune, providing up to a 16% improvement in per platform performance. On the NVIDIA Tesla K20c, the platform for which no predictive model achieves positive average speedups, we match or improve performance in the majority of cases, but over-coarsening on three of the programs causes a modest reduction in average performance.

Figure 8: Speedups of predicted coarsening factors for each platform. DeepTune outperforms Magni et al. on three of the four platforms. Transfer learning improves DeepTune speedups further, by 16% on average.
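Because the architecture is identical across tasks, the transfer step amounts to copying three layers' weights. A sketch, assuming models are represented as hypothetical layer-name-to-weights dictionaries:

```python
LANGUAGE_MODEL_LAYERS = ("embedding", "lstm_1", "lstm_2")

def transfer_language_model(source, target):
    # Copy the learned language-model weights from a model trained on
    # one optimization task into a fresh model for another task; the
    # remaining (heuristic-model) layers keep their fresh
    # initialization and are trained as normal.
    for layer in LANGUAGE_MODEL_LAYERS:
        target[layer] = source[layer]
    return target

trained_hm = {"embedding": [0.1], "lstm_1": [0.2], "lstm_2": [0.3], "dnn": [0.4]}
fresh_cf = {"embedding": [0.0], "lstm_1": [0.0], "lstm_2": [0.0], "dnn": [0.9]}
transferred = transfer_language_model(trained_hm, fresh_cf)
```

Only the three language-model layers are overwritten; the task-specific `dnn` weights of the target are untouched.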
We suspect that for this platform, further performance results are necessary due to its unusual optimization profile.

4.4. DeepTune Internal Activation States

We have shown that DeepTune automatically outperforms state-of-the-art predictive models for which experts have invested a great amount of time in engineering features. In this subsection we attempt to illuminate the inner workings, using a single example from Case Study B: predicting the thread coarsening factor for Parboil’s mriQ benchmark on four different platforms.

Figure 9: Visualizing the internal state of DeepTune when predicting the coarsening factor for Parboil's mriQ benchmark on four different architectures. The activations in each layer of the four models increasingly diverge the lower down the network.

Figure 9 shows the DeepTune configuration, with visual overlays showing the internal state. From top to bottom, we begin with the input, which is the 267 lines of OpenCL code for the mriQ kernel. This source code is preprocessed, formatted, and rewritten using variable and function renaming, shown in Figure 9b. The rewritten source code is tokenized and encoded in a 1-of-k vocabulary. Figure 9c shows the first 80 elements of this encoded sequence as a heatmap in which each cell’s color reflects its encoded value. The input, rewriting, and encoding are the same for each of the four platforms. The encoded sequences are then passed into the Embedding layer, which maps each token of the vocabulary to a point in a 64-dimensional vector space. Embeddings are learned during training so as to cluster semantically related tokens together. As such, they may differ between the four platforms. Figure 9d shows a PCA projection of the embedding space for one of the platforms, showing multiple clusters of tokens. By honing in
on one of the clusters and annotating each point with its corresponding token, we see that the cluster contains the semantically related OpenCL address space modifiers __private, __global, and __read_only. Two layers of 64 LSTM neurons model the sequence of embeddings, with the neuron activations of the second layer being used to characterize the entire sequence. Figure 9e shows the neurons in this layer for each of the four platforms, using a red-blue heatmap to visualize the intensity of each activation. Comparing the activations between the four platforms, we note a number of neurons in the layer with different responses across platforms. This indicates that the language model is partly specialized to the target platform. As information flows through the network, the layers become progressively more specialized to the specific platform. We see this in Figure 9f, which shows the two layers of the heuristic model. The activations within these increasingly diverge. The mean variance of activations across platforms increases threefold compared to the language model, from 0.039 to 0.107. Even the activations of the AMD HD 5900 and AMD Tahiti 7970 platforms are dissimilar, despite the final predicted coarsening factor for both platforms being the same. In Figure 9g we take the largest activation of the output layer as the final predicted coarsening factor. For this particular program, a state-of-the-art model achieves 54% of the maximum performance. DeepTune achieves 99%. 5. Related Work Machine learning has emerged as a viable means in automatically constructing heuristics for code optimization [3, 4, 24, 37–39]. Its great advantage is that it can adapt to changing hardware platforms as it has no a priori assumptions about their behavior. The success of machine learning based code optimization has required having a set of high-quality features that can capture the important characteristics of the target program. 
Given that there is an infinite number of these potential features, finding the right set of features is a non-trivial, time-consuming task. Various forms of program features have been used in compiler-based machine learning. These include static code structures [40] and runtime information such as system load [41] and performance counters [42]. In compiler research, the feature sets used for predictive models are often provided without explanation and rarely is the quality of those features evaluated. More commonly, an initial large, high dimensional candidate feature space is pruned via feature selection [3], or projected into a lower dimensional space [43, 44]. FEAST employs a range of existing feature selection methods to select useful candidate features [45]. Unlike these approaches, DeepTune extracts features and reduces the dimensionality of the feature space completely internally and without expert guidance. Park et al. present a unique graph-based approach for feature representations [46]. They use a Support Vector Machine where the kernel is based on a graph similarity metric. Their technique still requires hand coded features at the basic block level, but thereafter, graph similarity against each of the training programs takes the place of global features. Being a kernel method, it requires that training data graphs be shipped with the compiler, which may not scale as the size of the training data grows with the number of instances, and some training programs may be very large. Finally, their graph matching metric is expensive, requiring $O(n^3)$ to compare against each training example. By contrast, our method does not need any hand built static code features, and the deployment memory footprint is constant and prediction time is linear in the length of the program, regardless of the size of the training set. A few methods have been proposed to automatically generate features from the compiler’s intermediate representation [8, 9]. 
These approaches closely tie the implementation of the predictive model to the compiler IR, which means changes to the IR will require modifications to the model. The work of [9] uses genetic programming to search for features, and required a huge grammar to be written, some 160kB in length. Although much of this can be created from templates, selecting the right range of capabilities and search space bias is non trivial and up to the expert. The work of [8] expresses the space of features via logic programming over relations that represent information from the IRs. It greedily searches for expressions that represent good features. However, their approach relies on expert selected relations, combinators and constraints to work. For both approaches, the search time may be significant. Cavazos et al. present a reaction-based predictive model for software-hardware co-design [47]. Their approach profiles the target program using several carefully selected compiler options to see how program runtime changes under these options for a given micro-architecture setting. They then use the program “reactions” to predict the best available application speedup. While their approach does not use static code features, developers must carefully select a few settings from a large number of candidate options for profiling, because poorly chosen options can significantly affect the quality of the model. Moreover, the program must be run several times before optimization, while our technique does not require the program to be profiled. In recent years, machine learning techniques have been employed to model and learn from program source code on various tasks. These include mining coding conventions [14] and idioms [13], API example code [48] and pseudo-code generation [49], and benchmark generation [7]. 
Our work is the first attempt to extend the already challenging task of modeling distributions over source code to learning distributions over source code with respect to code optimizations. Recently, deep neural networks [50] have been shown to be a powerful tool for feature engineering in various tasks including image recognition [10, 11] and audio processing [12]. In the field of compiler optimization, no work so far has applied deep neural networks for program feature generation and selection. Our work is the first to do so.

6. Conclusions

Applying machine learning to compile-time and runtime optimizations requires generating features first. This is a time-consuming process that needs supervision by an expert, and even then we cannot be sure that the selected features are optimal. In this paper we present a novel tool for building optimization heuristics, DeepTune, which forgoes feature extraction entirely, relying on powerful language modeling techniques to automatically build complex and effective representations of programs directly from raw source code. The result translates into a huge reduction in development effort, improved heuristic performance, and simpler model designs. Our approach is fully automated. Using DeepTune, compiler developers no longer need to spend months using statistical methods and profile counters to select program features via trial and error. It is worth mentioning that we do not tailor our model design or parameters for the optimization task at hand, yet we achieve performance on par with, and in most cases exceeding, state-of-the-art predictive models. We used DeepTune to automatically construct heuristics for two challenging optimization problems: selecting the optimal execution device for OpenCL kernels, and selecting OpenCL thread coarsening factors. In both cases, we outperform state-of-the-art predictive models, achieving performance improvements of 14% and 12%, respectively.
We have also shown that the DeepTune architecture allows us to exploit information learned from another optimization problem to give the learning a boost. Doing so provides up to a 16% performance improvement when training using a handful of training programs. We suspect that this approach will be useful for other optimization tasks for which training programs are a scarce resource. In future work, we will extend our heuristic construction approach by automatically learning dynamic features over raw data; apply unsupervised learning techniques [51] over unlabeled source code to further improve learned representations of programs; and deploy trained DeepTune heuristic models to low power embedded systems using optimization and compression of neural networks [52]. References A. Artifact description A.1. Abstract Our research artifact consists of interactive Jupyter notebooks. The notebooks enable users to replicate all experiments in the paper, evaluate results, and plot figures. A.2. Description A.2.1. Check-list (Artifact Meta Information) - **Run-time environment**: Ubuntu Linux and a web browser. - **Hardware**: Users with an NVIDIA GPU may enable CUDA support to speed up computation of experiments. - **Output**: Trained neural networks, predictive model evaluations, figures and tables from the paper. - **Experiment workflow**: Install and run Jupyter notebook server; interact with and observe results in web browser. - **Experiment customization**: Edit code and parameters in Jupyter notebooks. - **Publicly available?**: Yes, code and data. See: [https://chriscummins.cc/pact17/](https://chriscummins.cc/pact17/) A.2.2. How Delivered A publicly available git repository containing Jupyter notebooks and experimental data. A.3. Installation See [https://chriscummins.cc/pact17/](https://chriscummins.cc/pact17/) for instructions. The code directory contains the Jupyter notebooks. 
Following the build instructions described in code/README.md, the full installation process is: ``` $ ./bootstrap.sh | bash $ ./configure $ make ``` A.4. Experiment Workflow 1. Launch the Jupyter server using the command: `make run`. 2. In a web browser, navigate to `http://localhost:8000`. 3. Select a Jupyter notebook to open it. 4. Repeatedly press the play button (tooltip is “run cell, select below”) to step through each cell of the notebook. OR select “Kernel” > “Restart & Run All” from the menu to run all of the cells in order. A.5. Evaluation and Expected Result Code cells within Jupyter notebooks display their output inline, and may be compared against the values in the paper. Expected results are described in text cells. A.6. Experiment Customization The experiments are fully customizable. The Jupyter notebook can be edited “on the fly”. Simply type your changes into the cells and re-run them. For example, note that some of the code cells depend on the values of prior cells, so must be executed in sequence. Select “Kernel” > “Restart & Run All” from the menu to run all of the cells in order. A.7. Notes For more information about DeepTune, visit: [https://chriscummins.cc/deeptune](https://chriscummins.cc/deeptune) For more information about Artifact Evaluation, visit: [http://ctuning.org/ae](http://ctuning.org/ae)
Platform LSF Version 9 Release 1.3 Foundations

IBM

First edition

This edition applies to version 9, release 1 of IBM Platform LSF (product number 5725G82) and to all subsequent releases and modifications until otherwise indicated in new editions. Significant changes or additions to the text and illustrations are indicated by a vertical line (|) to the left of the change.

If you find an error in any Platform Computing documentation, or you have a suggestion for improving it, please let us know. In the IBM Knowledge Center, add your comments and feedback to any topic. You can also send your suggestions, comments, and questions to the following email address: pcdcom@ca.ibm.com. Be sure to include the publication title and order number, and, if applicable, the specific location of the information about which you have comments (for example, a page number or a browser URL).

When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

## Contents

**Chapter 1. IBM Platform LSF: An Overview**
- Introduction to IBM Platform LSF
- LSF cluster components

**Chapter 2. Inside an LSF Cluster**
- LSF daemons and processes
- LSF cluster communications paths
- Fault tolerance
- Security

**Chapter 3. Inside Workload Management**
- Job life cycle

**Chapter 4. LSF with EGO Enabled**
- EGO component overview
- Resources
- Sharing of LSF resources

**Notices**
- Trademarks
- Privacy policy considerations

Chapter 1.
IBM Platform LSF: An Overview

Introduction to IBM Platform LSF

The Platform LSF ("LSF", short for load sharing facility) software is industry-leading enterprise-class software that distributes work across existing heterogeneous IT resources to create a shared, scalable, and fault-tolerant infrastructure that delivers faster, more reliable workload performance and reduces cost. LSF balances load and allocates resources, and provides access to those resources. LSF provides a resource management framework that takes your job requirements, finds the best resources to run the job, and monitors its progress. Jobs always run according to host load and site policies.

**Cluster**

A group of computers (hosts) running LSF that work together as a single unit, combining computing power, workload, and resources. A cluster provides a single-system image for a network of computing resources. Hosts can be grouped into a cluster in a number of ways. A cluster could contain:
- All the hosts in a single administrative group
- All the hosts on a subnetwork

**Hosts**

Hosts in your cluster perform different functions.
- Master host: LSF server host that acts as the overall coordinator for the cluster, doing all job scheduling and dispatch.
- Server host: A host that submits and runs jobs.
- Client host: A host that only submits jobs and tasks.
- Execution host: A host that runs jobs and tasks.
- Submission host: A host from which jobs and tasks are submitted.

**Job**

A unit of work that is running in the LSF system. A job is a command that is submitted to LSF for execution. LSF schedules, controls, and tracks the job according to configured policies. Jobs can be complex problems, simulation scenarios, extensive calculations, or anything that needs compute power.

**Job slot**

A job slot is a bucket into which a single unit of work is assigned in the LSF system. Hosts can be configured with multiple job slots and you can dispatch jobs from queues until all the job slots are filled.
You can correlate job slots with the total number of CPUs in the cluster. Queue A cluster-wide container for jobs. All jobs wait in queues until they are scheduled and dispatched to hosts. Queues do not correspond to individual hosts; each queue can use all server hosts in the cluster, or a configured subset of the server hosts. When you submit a job to a queue, you do not need to specify an execution host. LSF dispatches the job to the best available execution host in the cluster to run that job. Queues implement different job scheduling and control policies. Resources Resources are the objects in your cluster that are available to run work. For example, resources include but are not limited to hosts, CPU slots, and licenses. LSF cluster components An LSF cluster manages resources, accepts and schedules workload, and monitors all events. LSF can be accessed by users and administrators by a command-line interface, an API, or through the IBM Platform Application Center. IBM Platform LSF - Core: The core of LSF includes daemons and functionality that schedules and runs jobs, as well as managing resources. - IBM License Scheduler allows you to make policies that control the way software licenses are shared among different users in your organization. IBM Platform LSF License Scheduler works with FlexNet™ products to control and monitor license usage. Platform LSF Documentation Center The Platform LSF Documentation Center is your access point to LSF documentation. It is provided with the LSF installation files and once extracted it can be accessed from any web browser. It can also be linked to directly from IBM Platform Application Center. The Documentation Center provides an overview of the organization of the product documentation. It also provides easy access to each document and quick links to frequently used commands and tasks. In addition to links to all documents, the Documentation Center provides full search capabilities within the documentation. 
You can perform keyword searches within a document or across the full documentation set. Chapter 2. Inside an LSF Cluster LSF daemons and processes There are multiple LSF processes running on each host in the cluster. The type and number of processes running depends on whether the host is a master host or a compute host. <table> <thead> <tr> <th>LSF daemon</th> <th>Role</th> </tr> </thead> <tbody> <tr> <td>mbatchd</td> <td>Job requests and dispatch</td> </tr> <tr> <td>mbschd</td> <td>Job scheduling</td> </tr> <tr> <td>sbatchd</td> <td>Job execution</td> </tr> <tr> <td>res</td> <td>Job execution</td> </tr> <tr> <td>lim</td> <td>Host information</td> </tr> <tr> <td>pim</td> <td>Job process information</td> </tr> <tr> <td>elim</td> <td>Dynamic load indices</td> </tr> </tbody> </table> **mbatchd** Master Batch Daemon running on the master host. Responsible for the overall state of jobs in the system. Receives job submission, and information query requests. Manages jobs held in queues. Dispatches jobs to hosts as determined by mbschd. **mbschd** Master Batch Scheduler Daemon running on the master host. Works with mbatchd. Makes scheduling decisions based on job requirements, policies, and resource availability. Sends scheduling decisions to the mbatchd. **sbatchd** Slave Batch Daemon running on each server host including the master host. Receives the request to run the job from mbatchd and manages local execution of the job. Responsible for enforcing local policies and maintaining the state of jobs on the host. sbatchd forks a child sbatchd for every job. The child sbatchd runs an instance of res to create the execution environment in which the job runs. The child sbatchd exits when the job is complete. **res** Remote Execution Server (RES) running on each server host. Accepts remote execution requests to provide transparent and secure remote execution of jobs and tasks. **lim** Load Information Manager (LIM) running on each server host. 
Collects host load and configuration information and forwards it to the master LIM running on the master host. Reports the information displayed by lsload and lshosts. Static indices are reported when the LIM starts up or when the number of CPUs (ncpus) change. **Master lim** The LIM running on the master host. Receives load information from the LIMs running on hosts in the cluster. Forwards load information to mbatchd, which forwards this information to mbschd to support scheduling decisions. If the master LIM becomes unavailable, a LIM on a master candidate automatically takes over. **pim** Process Information Manager (PIM) running on each server host. Started by LIM, which periodically checks on PIM and restarts it if it dies. Collects information about job processes running on the host such as CPU and memory used by the job, and reports the information to sbatchd. ELIM External LIM (ELIM) is a site-definable executable that collects and tracks custom dynamic load indices. An ELIM can be a shell script or a compiled binary program, which returns the values of the dynamic resources you define. The ELIM executable must be named *elim.anything* and located in LSF_SERVERDIR. LSF cluster communications paths The communication paths between the daemons in the cluster are as shown below: Fault tolerance LSF has a robust architecture designed with fault tolerance in mind. Every component in the system has a recovery operation, so that vital components are monitored by another component and can automatically recover from a failure. LSF is designed to continue operating even if some of the hosts in the cluster are unavailable. One host in the cluster acts as the master, but if the master host becomes unavailable another master host candidate takes over. LSF is available as long as there is one available master host candidate in the cluster. LSF can tolerate the failure of any host or group of hosts in the cluster. 
When a host becomes unavailable, all jobs running on that host are either requeued or lost, depending on whether the job was marked as rerunnable. No other pending or running jobs are affected. How failover works Fault tolerance in LSF depends on the event log file, lsb.events, which is kept on the primary file server. Every event in the system is logged in this file, including all job submissions and job and host status changes. If the master host becomes unavailable, a new master is chosen from the master candidate list, and sbatchd on the new master starts a new mbatchd. The new mbatchd reads the lsb.events file to recover the state of the system. For sites not wanting to rely solely on a central file server for recovery information, LSF can be configured to maintain a duplicate event log by keeping a replica of lsb.events. The replica is stored on the file server, and used if the primary copy is unavailable. When using the duplicate event log function, the primary event log is stored locally on the first master host, and re-synchronized with the replicated copy when the host recovers. Host failover The LSF master host is chosen dynamically. If the current master host becomes unavailable, another host takes over automatically. The failover master host is selected from the list defined in LSF_MASTER_LIST in lsf.conf (specified in install.config at installation). The first available host in the list acts as the master. Running jobs are managed by sbatchd on each server host. When the new mbatchd starts, it polls the sbatchd on each host and finds the current status of its jobs. If sbatchd fails but the host is still running, jobs running on the host are not lost. When sbatchd is restarted it regains control of all jobs running on the host. 
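The recovery mechanism described above, replaying lsb.events to rebuild the state of the system, can be illustrated with a small sketch. The record format below is deliberately simplified and hypothetical; real lsb.events entries carry many more fields per event.

```python
# Hedged sketch of event-log replay: a new mbatchd rebuilds its job
# table by reading events in order. Simplified, hypothetical records.

def replay_events(events):
    """Rebuild job states from an ordered event stream."""
    jobs = {}
    for record in events:
        kind, job_id = record[0], record[1]
        if kind == "JOB_NEW":
            jobs[job_id] = "PEND"       # job submitted, waiting in queue
        elif kind == "JOB_START":
            jobs[job_id] = "RUN"        # dispatched to an execution host
        elif kind == "JOB_FINISH":
            jobs[job_id] = record[2]    # DONE or EXIT
    return jobs

log = [
    ("JOB_NEW", 101), ("JOB_NEW", 102),
    ("JOB_START", 101), ("JOB_FINISH", 101, "DONE"),
    ("JOB_START", 102),
]
state = replay_events(log)   # {101: "DONE", 102: "RUN"}
```

Because every submission and status change is appended to the log, replaying it from the beginning is sufficient to recover which jobs are pending, running, or finished, which is exactly what the new mbatchd needs before polling each sbatchd.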
Job failover Jobs can be submitted as rerunnable, so that they automatically run again from the beginning or as checkpointable, so that they start again from a checkpoint on another host if they are lost because of a host failure. If all of the hosts in a cluster go down, all running jobs are lost. When a master candidate host comes back up and takes over as master, it reads the lsb.events file to get the state of all batch jobs. Jobs that were running when the systems went down are assumed to have exited unless they were marked as rerunnable, and email is sent to the submitting user. Pending jobs remain in their queues, and are scheduled as hosts become available. Partitioned cluster If the cluster is partitioned by a network failure, a master LIM takes over on each side of the partition as long as there is a master host candidate on each side of the partition. Interactive load-sharing remains available as long as each host still has access to the LSF executables. Partitioned network If the network is partitioned, only one of the partitions can access lsb.events, so batch services are only available on one side of the partition. A lock file is used to make sure that only one mbatchd is running in the cluster. Job exception handling You can configure hosts and queues so that LSF detects exceptional conditions while jobs are running, and takes appropriate action automatically. You can customize what exceptions are detected and the corresponding actions. For example, you can set LSF to restart a job automatically if it exits with a specific error code. Security LSF security model By default, the LSF security model keeps track of user accounts internally. A user account defined in LSF includes a password to provide authentication and an assigned role to provide authorization, such as administrator. 
LSF user roles LSF, without EGO enabled, supports the following roles: - LSF user: Has permission to submit jobs to the LSF cluster and view the states of jobs and the cluster. - Primary LSF administrator: Has permission to perform clusterwide operations, change configuration files, reconfigure the cluster, and control jobs submitted by all users. Configuration files such as lsb.params and lsb.hosts configure all aspects of LSF. - LSF administrator: Has permission to perform operations that affect other LSF users. - Cluster administrator: Can perform administrative operations on all jobs and queues in the cluster. May not have permission to change LSF configuration files. - Queue administrator: Has administrative permissions limited to a specified queue. - Hostgroup administrator: Has administrative permissions limited to a specified host group. - Usergroup administrator: Has administrative permissions limited to a specified user group. **LSF user roles with EGO enabled** LSF, with EGO enabled, supports the following roles: - Cluster Administrator: Can administer any objects and workload in the cluster - Consumer Administrator: Can administer any objects and workload in consumers to which they have access - Consumer User: Can run workload in consumers to which they have access User accounts are created and managed in EGO. EGO authorizes users from its user database. **LSF user groups** LSF allows you to use any existing UNIX and Linux user groups directly by specifying a UNIX or Linux user group anywhere an LSF user group can be specified. **External authentication** LSF provides a security plug-in for sites that prefer to use external or third-party security mechanisms, such as Kerberos, LDAP, ActiveDirectory, and so on. You can create a customized eauth executable to provide external authentication of users, hosts, and daemons. Credentials are passed from an external security system. 
The eauth executable can also be customized to obtain credentials from an operating system or from an authentication protocol such as Kerberos. Chapter 3. Inside Workload Management Job life cycle 1. Submit a job You submit a job from an LSF client or server with the `bsub` command. If you do not specify a queue when submitting the job, the job is submitted to the default queue. Jobs are held in a queue waiting to be scheduled and have the PEND state. The job is held in a job file in the `LSF_SHAREDIR/cluster_name/logdir/info` directory, or in one of its subdirectories if `MAX_INFO_DIRS` is defined in the configuration file `lsb.params`. - Job ID: LSF assigns each job a unique job ID when you submit the job. - Job name: You can also assign an arbitrary name to the job with the `-J` option of `bsub`. Unlike the job ID, the job name is not necessarily unique. 2. Schedule the job 1. The master batch daemon (`mbatchd`) looks at jobs in the queue and sends the jobs for scheduling to the master batch scheduler (`mbschd`) at a preset time interval (defined by the parameter `JOB_SCHEDULING_INTERVAL` in the configuration file `lsb.params`). 2. `mbschd` evaluates jobs and makes scheduling decisions based on: - Job priority - Scheduling policies - Available resources 3. `mbschd` selects the best hosts where the job can run and sends its decisions back to `mbatchd`. Resource information is collected at preset time intervals by the master load information manager (LIM) from LIMs on server hosts. The master LIM communicates this information to `mbatchd`, which in turn communicates it to `mbschd` to support scheduling decisions. 3. Dispatch the job As soon as mbatchd receives scheduling decisions, it immediately dispatches the jobs to hosts. 4. Run the job The slave batch daemon (sbatchd): 1. Receives the request from mbatchd. 2. Creates a child sbatchd for the job. 3. Creates the execution environment. 4. Starts the job using a remote execution server (res). 
   LSF copies the execution environment from the submission host to the execution host. The execution environment includes the following:

   - Environment variables needed by the job
   - Working directory where the job begins running
   - Other system-dependent environment settings, for example:
     - On UNIX and Linux, resource limits and umask
     - On Windows, desktop and Windows root directory

   The job runs under the user account that submitted the job and has the status RUN.

5. Return output

   When a job is completed, it is assigned the DONE status if the job completed without any problems. The job is assigned the EXIT status if errors prevented the job from completing. `sbatchd` communicates job information, including errors and output, to `mbatchd`.

6. Send email to client

   `mbatchd` returns the job output, job error, and job information to the submission host through email. Use the `-o` and `-e` options of `bsub` to send job output and errors to a file.

   - Job report: A job report is sent by email to the LSF client and includes:
     - Job information: CPU use, memory use, and the name of the account that submitted the job
     - Job output
     - Errors

**Job submission**

On the command line, `bsub` is used to submit jobs, and you can specify many options with `bsub` to modify the default behavior. Jobs must be submitted to a queue. You can also use the IBM Platform Application Center to submit jobs.

**Queues**

Queues represent a set of pending jobs, lined up in a defined order and waiting for their opportunity to use resources. Queues implement different job scheduling and control policies. Jobs enter the queue via the `bsub` command.
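The PEND → RUN → DONE/EXIT progression described in the job life cycle above can be summarized as a small state machine. The Python class below is purely illustrative (it is not part of LSF); the state names mirror the LSF job states:

```python
# Illustrative model of the LSF job life cycle: jobs wait in PEND after
# submission, move to RUN when dispatched, and finish as DONE or EXIT.

class Job:
    VALID_TRANSITIONS = {
        "PEND": {"RUN"},          # dispatched by mbatchd
        "RUN": {"DONE", "EXIT"},  # finished cleanly, or errors prevented completion
    }

    def __init__(self, job_id, name=None):
        self.job_id = job_id  # unique job ID assigned at submission
        self.name = name      # optional -J name; not necessarily unique
        self.state = "PEND"   # jobs are held in a queue after bsub

    def transition(self, new_state):
        if new_state not in self.VALID_TRANSITIONS.get(self.state, set()):
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

job = Job(1001, name="sort_run")
job.transition("RUN")
job.transition("DONE")
print(job.state)  # DONE
```

A job that hits an error during RUN would instead call `transition("EXIT")`; any other jump (for example PEND straight to DONE) is rejected.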
Queues have the following attributes associated with them:

- Priority
- Name
- Queue limits (restrictions on hosts, number of jobs, users, groups, or processors)
- Standard UNIX and Linux limits: memory, swap, process, CPU
- Scheduling policies
- Administrators
- Run conditions
- Load-sharing threshold conditions
- UNIX `nice(1)` value (sets the UNIX and Linux scheduler priority)

**Queue priority**

Defines the order in which queues are searched to determine which job will be processed. Queues are assigned a priority by the LSF administrator, where a higher number has a higher priority. Queues are serviced by LSF in order of priority from the highest to the lowest. If multiple queues have the same priority, LSF schedules all the jobs from these queues in first-come, first-served order.

**Automatic queue selection**

When you submit a job, LSF considers the requirements of the job and automatically chooses a suitable queue from a list of candidate default queues. LSF selects a suitable queue according to:

- User access restriction: Queues that do not allow this user to submit jobs are not considered.
- Host restriction: If the job explicitly specifies a list of hosts on which the job can be run, then the selected queue must be configured to send jobs to hosts in the list.
- Queue status: Closed queues are not considered.
- Exclusive execution restriction: If the job requires exclusive execution, then queues that are not configured to accept exclusive jobs are not considered.
- Job’s requested resources: These must be within the resource allocation limits of the selected queue.

If multiple queues satisfy the above requirements, then the first queue listed in the candidate queues that satisfies the requirements is selected.

**Job scheduling and dispatch**

Submitted jobs wait in queues until they are scheduled and dispatched to a host for execution.
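The automatic queue selection rules above amount to filtering the candidate default queues and taking the first one that qualifies. The following Python sketch is illustrative only; the queue and job records are assumptions for the example, not LSF's internal data model:

```python
# Sketch of automatic queue selection: apply each restriction in turn,
# then return the first listed candidate queue that satisfies all of them.

def select_queue(job, candidate_queues):
    for q in candidate_queues:  # candidates are tried in listed order
        if job["user"] not in q["allowed_users"]:
            continue  # user access restriction
        if q["status"] == "closed":
            continue  # closed queues are not considered
        if job.get("hosts") and not set(job["hosts"]) <= set(q["hosts"]):
            continue  # queue must be able to send jobs to the requested hosts
        if job.get("exclusive") and not q["accepts_exclusive"]:
            continue  # exclusive execution restriction
        if job["mem"] > q["mem_limit"]:
            continue  # requested resources must fit the queue's limits
        return q["name"]
    return None  # no suitable queue found

queues = [
    {"name": "short", "allowed_users": {"alice"}, "status": "open",
     "hosts": ["hostA"], "accepts_exclusive": False, "mem_limit": 4096},
    {"name": "normal", "allowed_users": {"alice", "bob"}, "status": "open",
     "hosts": ["hostA", "hostB"], "accepts_exclusive": True, "mem_limit": 16384},
]
job = {"user": "bob", "hosts": ["hostB"], "exclusive": True, "mem": 8192}
print(select_queue(job, queues))  # normal
```

Here `short` is rejected on the user restriction, so the job lands in `normal`, the first remaining candidate that satisfies every rule.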
When a job is submitted to LSF, many factors control when and where the job starts to run:

- Active time window of the queue or hosts
- Resource requirements of the job
- Availability of eligible hosts
- Various job slot limits
- Job dependency conditions
- Fairshare constraints (configured user share policies)
- Load conditions

**Scheduling policies**

To solve diverse problems, LSF allows multiple scheduling policies in the same cluster. LSF has several queue scheduling policies such as exclusive, preemptive, fairshare, and hierarchical fairshare.

- **First-come, first-served (FCFS) scheduling:** By default, jobs in a queue are dispatched in FCFS order. This means that jobs are dispatched according to their order in the queue.
- **Service level agreement (SLA) scheduling:** An SLA in LSF is a "just-in-time" scheduling policy that schedules the services agreed to between LSF administrators and LSF users. The SLA scheduling policy defines how many jobs should be run from each SLA to meet the configured goals.
- **Fairshare scheduling:** If you specify a fairshare scheduling policy for the queue or if host partitions have been configured, LSF dispatches jobs between users based on assigned user shares, resource usage, or other factors.
- **Preemption:** You can specify desired behavior so that when two or more jobs compete for the same resources, one job preempts the other. Preemption can apply not only to job slots, but also to advance reservation (reserving hosts for particular jobs) and licenses (using IBM Platform License Scheduler).
- **Backfill:** Allows small jobs to run on job slots reserved for other jobs, provided the backfilling job completes before the reservation time expires and resource usage is due.

**Scheduling and dispatch**

Jobs are scheduled at regular intervals (5 seconds by default). Once jobs are scheduled, they can be immediately dispatched to hosts.
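The default ordering (queues considered from highest to lowest priority, FCFS within each queue, and jobs from equal-priority queues merged in submission order) can be sketched as a single sort. The data layout below is an assumption for illustration:

```python
# Sketch of the default dispatch ordering: higher queue priority first;
# within (and across equal-priority) queues, first-come, first-served.

def dispatch_order(queues):
    # queues: list of {"priority": int, "jobs": [(submit_time, job_id), ...]}
    pending = []
    for q in queues:
        for submit_time, job_id in q["jobs"]:
            # Negate priority so higher-priority queues sort first;
            # submission time breaks ties FCFS, including across
            # queues that share the same priority.
            pending.append((-q["priority"], submit_time, job_id))
    return [job_id for _, _, job_id in sorted(pending)]

queues = [
    {"priority": 50, "jobs": [(3, "j3"), (5, "j5")]},
    {"priority": 80, "jobs": [(4, "j4")]},
    {"priority": 50, "jobs": [(1, "j1")]},
]
print(dispatch_order(queues))  # ['j4', 'j1', 'j3', 'j5']
```

Note that `j4` runs first despite being submitted after `j1` and `j3`: queue priority outranks submission order, which is why jobs are not necessarily dispatched in order of submission.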
To prevent overloading any host, by default LSF waits a short time between dispatching jobs to the same host.

**Dispatch order**

Jobs are not necessarily dispatched in order of submission. Each queue has a priority number set by the LSF administrator when the queue is defined. LSF tries to start jobs from the highest priority queue first. LSF considers jobs for dispatch in the following order:

- For each queue, from highest to lowest priority. If multiple queues have the same priority, LSF schedules all the jobs from these queues in first-come, first-served order.
- For each job in the queue, according to FCFS order.
- If any host is eligible to run this job, start the job on the best eligible host, and mark that host ineligible to start any other job until JOB_ACCEPT_INTERVAL has passed.

**Host selection**

Each time LSF attempts to dispatch a job, it checks to see which hosts are eligible to run the job. A number of conditions determine whether a host is eligible:

- Host dispatch windows
- Resource requirements of the job
- Resource requirements of the queue
- Host list of the queue
- Host load levels
- Job slot limits of the host
- User quota and user limits

A host is only eligible to run a job if all the conditions are met. If a job is queued and there is an eligible host for that job, the job is placed on that host. If more than one host is eligible, the job is started on the best host based on both the job and the queue resource requirements.

**Host load levels**

A host is available if the values of the load indices (such as r1m, pg, mem) of the host are within the configured scheduling thresholds. There are two sets of scheduling thresholds: host and queue. If any load index on the host exceeds the corresponding host threshold or queue threshold, the host is not eligible to run any job.

**Eligible hosts**

When LSF tries to place a job, it obtains current load information for all hosts.
The load levels on each host are compared to the scheduling thresholds configured for that host in the Host section of lsb.hosts, as well as the per-queue scheduling thresholds configured in lsb.queues. If any load index exceeds either its per-queue or its per-host scheduling threshold, no new job is started on that host.

**Job execution environment**

When LSF runs your jobs, it tries to make execution as transparent to the user as possible. LSF copies the environment from the submission host to the execution host. The execution environment includes the following:

- Environment variables needed by the job
- Working directory where the job begins running
- Other system-dependent environment settings; for example, resource usage limits

**Shared user directories**

To provide transparent remote execution, LSF commands determine the user’s current working directory and use that directory on the remote host.

**Executables and the PATH environment variable**

Search paths for executables (the PATH environment variable) are passed to the remote execution host unchanged.

**Note:** In mixed clusters, LSF works best when the user binary directories have the same path names on different host types. This makes the PATH variable valid on all hosts. For easy administration, LSF configuration files are stored in a shared directory.

Chapter 4. LSF with EGO Enabled

**EGO component overview**

EGO can be enabled with LSF to provide a system infrastructure to control and manage cluster resources. Just as an operating system running on a single machine aggregates and virtualizes physical resources and allocates them to applications, EGO performs similar functions, but across a distributed environment. EGO manages both logical and physical resources and supports all forms of applications. EGO manages the supply of resources, making them available to applications. Hosts can be divided into two groups: management hosts and compute hosts.
Management hosts provide specialized services to the cluster, while compute hosts run user workload.

**Management hosts**

Management hosts provide both cluster and workload management services within the cluster, and are not expected to run workload for users. The master host, all master candidate hosts, and session manager hosts must be management hosts. Other management hosts include the host running the data loaders and data purger for the reporting feature. Management hosts all run on the same operating system: all Windows or all UNIX.

- Master host: The master host is the first host installed in the cluster. The resource manager (vemkd) for the cluster resides on this host. The master host controls the rest of the hosts in the cluster and is the interface to the clients of the cluster.
- Master candidates: There is only one master host at a time. If the master host should fail, another host automatically takes over the master host role. Hosts that can act as the master are called master candidates.
- Session manager host: One or more management hosts run session managers. There is one session manager per available slot on a management host. There is one session manager per application.

**Compute hosts**

Compute hosts are those hosts in the cluster that provide computing resources to consumers. A cluster may contain any number of compute hosts, but must have at least one compute host.

- CPU slots: A CPU slot is the unit used to measure compute resources. A single CPU slot can run one service instance on a compute host, or one session manager on a management host.

**Daemons**

- VEMKD: The VEM kernel daemon that runs on the master host. It starts other daemons and responds to allocation requests.
- EGOSC: The EGO service controller requests appropriate resources from the VEMKD and controls service instances.
- PEM: The process execution manager works for the VEMKD, starting, controlling, and monitoring activities, as well as collecting and sending run time resource usage.
**Resources**

Resources are physical and logical entities that are used by applications in order to run. While resource is a generic term, and can include low-level things such as shared memory segments or semaphores, in LSF, EGO manages CPU slots.

A resource of a particular type has attributes. For example, a compute host has the attributes of memory, CPU utilization, operating system type, and so on.

**Resource groups**

Resources may be grouped together into logical groups to simplify identification, resource allocation, or for administration and monitoring purposes. These resource groups are used to provide a consumer with a like group of hosts to run workload: any host in a resource group should be able to run the same workload.

As shown in Figure 1, there are two resource groups out of the box:

- ManagementHosts
- ComputeHosts

If all of your hosts are identical, these resource groups may suffice. If your application requires a specific type of host (for example, with a minimum processor speed), and not all hosts meet these criteria, you likely need to create resource groups to group like hosts together. For example, a simple way to group resources may be to group your hosts by operating system type.

EGO provides a common grouping mechanism for resources. Resources may come and go from the system, so EGO supports dynamic membership in a resource group. Hosts can be placed explicitly into individual resource groups, or the resource groups can be defined to have a dynamic membership based on specific criteria. These criteria include operating system type, CPU speed, total memory or swap configuration, or custom attributes.

**Sharing of LSF resources**

LSF resources are shared as defined in the resource distribution plan. LSF requests resources from the EGO resource manager. Based on the values specified in the resource distribution plan, the resource manager returns the number of available slots and the names of the hosts on which the slots reside.
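Dynamic resource-group membership as described above (hosts grouped by criteria such as operating system type or total memory) can be thought of as a predicate evaluated over host attributes whenever hosts join or leave the cluster. The attribute names in this Python sketch are illustrative assumptions, not EGO's actual schema:

```python
# Sketch of dynamic resource group membership: a group is defined by a
# predicate over host attributes, and membership is recomputed on demand
# as hosts come and go from the system.

def group_members(hosts, predicate):
    return sorted(h["name"] for h in hosts if predicate(h))

hosts = [
    {"name": "mgmt1", "os": "linux", "mem_gb": 64, "role": "management"},
    {"name": "comp1", "os": "linux", "mem_gb": 32, "role": "compute"},
    {"name": "comp2", "os": "windows", "mem_gb": 16, "role": "compute"},
]

# The two out-of-the-box groups, plus a custom criteria-based group.
management = group_members(hosts, lambda h: h["role"] == "management")
compute = group_members(hosts, lambda h: h["role"] == "compute")
linux_big = group_members(hosts, lambda h: h["os"] == "linux" and h["mem_gb"] >= 32)

print(management)  # ['mgmt1']
print(compute)     # ['comp1', 'comp2']
print(linux_big)   # ['comp1', 'mgmt1']
```

Adding a new Linux host with enough memory to the `hosts` list would put it in `linux_big` on the next evaluation, which is the essence of dynamic membership.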
Notices This information was developed for products and services offered in the U.S.A. IBM® may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing IBM Corporation North Castle Drive Armonk, NY 10504-1785 U.S.A. For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to: Intellectual Property Licensing Legal and Intellectual Property Law IBM Japan Ltd. 19-21, Nihonbashi-Hakozakicho, Chuo-ku Tokyo 103-8510, Japan The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. 
Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact: IBM Corporation Intellectual Property Law Mail Station P300 2455 South Road, Poughkeepsie, NY 12601-5400 USA Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee. The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us. Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurement may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment. 
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. All statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs. Each copy or any portion of these sample programs or any derivative work, must include a copyright notice as follows: © (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. _enter the year or years_. 
If you are viewing this information softcopy, the photographs and color illustrations may not appear. ### Trademarks IBM, the IBM logo, and ibm.com® are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at [http://www.ibm.com/legal/copytrade.shtml](http://www.ibm.com/legal/copytrade.shtml). Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. LSF®, Platform, and Platform Computing are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others. ### Privacy policy considerations IBM Software products, including software as a service solutions, (“Software Offerings”) may use cookies or other technologies to collect product usage information, to help improve the end user experience, to tailor interactions with the end user or for other purposes. In many cases no personally identifiable information is collected by the Software Offerings. Some of our Software Offerings can help enable you to collect personally identifiable information. 
If this Software Offering uses cookies to collect personally identifiable information, specific information about this offering’s use of cookies is set forth below. This Software Offering does not use cookies or other technologies to collect personally identifiable information. If the configurations deployed for this Software Offering provide you as customer the ability to collect personally identifiable information from end users via cookies and other technologies, you should seek your own legal advice about any laws applicable to such data collection, including any requirements for notice and consent. For more information about the use of various technologies, including cookies, for these purposes, See IBM’s Privacy Policy at [http://www.ibm.com/privacy](http://www.ibm.com/privacy) and IBM’s Online Privacy Statement at [http://www.ibm.com/privacy/details](http://www.ibm.com/privacy/details) the section entitled “Cookies, Web Beacons and Other Technologies” and the “IBM Software Products and Software-as-a-Service Privacy Statement” at
Code Generation in the Polyhedral Model Is Easier Than You Think

Cédric Bastoul

Cédric Bastoul. Code Generation in the Polyhedral Model Is Easier Than You Think. 2004, pp.7–16. hal-00017260

HAL Id: hal-00017260
https://hal.archives-ouvertes.fr/hal-00017260
Submitted on 18 Jan 2006

Abstract

Many advances in automatic parallelization and optimization have been achieved through the polyhedral model. It has been extensively shown that this computational model provides convenient abstractions to reason about and apply program transformations. Nevertheless, the complexity of code generation has long been a deterrent for using polyhedral representation in optimizing compilers. First, code generators have a hard time coping with generated code size and control overhead that may spoil theoretical benefits achieved by the transformations. Second, this step is usually time consuming, hampering the integration of the polyhedral framework in production compilers or feedback-directed, iterative optimization schemes. Moreover, current code generation algorithms only cover a restrictive set of possible transformation functions. This paper discusses a general transformation framework able to deal with non-unimodular, non-invertible, non-integral or even non-uniform functions. It presents several improvements to a state-of-the-art code generation algorithm.
Two directions are explored: generated code size and code generator efficiency. Experimental evidence proves the ability of the improved method to handle real-life problems.

1. Introduction

Usual compiler intermediate representations like abstract syntax trees are not appropriate for complex program restructuring. Simple optimizations, e.g. constant folding or scalar replacement, may be achieved without hard modifications of such stiff data structures. But more complex transformations such as loop inversion, skewing, or tiling modify the execution order, and this is far away from the syntax. A model based on a linear-algebraic representation of programs and transformations emerged in the Eighties to address this issue: the polyhedral (or polytope) model. This model became very popular because of its rich mathematical theory and its intuitive geometric interpretation. Moreover, it addresses a class of codes with very regular control that includes a large range of real-life program parts [3].

The polyhedral framework is basically a plugin to the conventional compilation process. It starts from the abstract syntax tree by translating the program parts that fit the model into the linear-algebraic representation. The next step is to select a new execution order by using a reordering function (a schedule, or a placement, or a chunking function). Finding suitable execution orders has been the subject of most of the research on the polyhedral model [4, 5, 9, 12, 13, 20, 22, 24, 27]. Lastly, the code generation step returns back to an abstract syntax tree or to a new source code implementing the execution order implied by the reordering function.

Up to now, the polyhedral model failed to integrate production compilers. The main reasons touch on the code generation step. Firstly, most algorithms require severe limitations on the reordering functions (e.g. to be unimodular or invertible) which reduce the opportunities of the optimization techniques to find efficient solutions.
Next, simple-minded schemes for loop building may generate large and/or inefficient codes which can offset the optimization they are enabling. Finally, the complexity of the problem is challenging for real-life programs and hampers the integration of the framework in iterative optimization schemes.

In this paper, we will show how it is possible to handle very general transformations in the polyhedral model and that, starting from one of the best algorithms known so far [21], we can improve it to produce in a reasonable amount of time an efficient target code with limited size growth. The paper is organized as follows. Section 2 introduces the polyhedral model formally. Section 3 presents a general program transformation framework within this model. Section 4 describes the code generation algorithm and proposes new ways to achieve quickly an efficient, small target code. In section 5, experimental results obtained through the algorithm implementation are shown. Finally, section 6 discusses related work and concludes.

2. Background and Notations

The polyhedral model is a representation of both sequential and parallel programs. It corresponds to a subset of imperative languages like C or FORTRAN known as static control programs [11]. This class includes a large range of programs which are discussed in depth by Xue [28]. Their properties can be roughly summarized in this way: (1) control statements are do loops with affine bounds and if conditions with affine conditions (in fact control can be more complex, see [28]); (2) affine bounds and conditions depend only on outer loop counters and constant parameters. A maximal set of consecutive statements with static control in any program is called a static control part (SCoP) [3]. The kernel in Figure 1 is an example of strict acceptance of the static control restrictions and will be used for illustrating further concepts.
The loops in such an imperative language can be represented using $n$-entry column vectors called iteration vectors: $\vec{x} = (i_1, i_2, \ldots, i_n)^T$, where $i_k$ is the $k^{th}$ loop index and $n$ is the depth of the innermost loop. Considering the static control class, the program execution can be fully described by using two specifications for each statement: its iteration domain, i.e. the set of iteration vectors for which it is executed, and a scattering function $\theta(\vec{x}) = T\vec{x} + \vec{t}$, where $T$ is a constant matrix and $\vec{t}$ is a constant vector, possibly parametric. Depending on the context, the scattering function may have several interpretations: to distribute the iterations in space, i.e. across different processors, to order them in time, or both (by composition), etc. In the case of space-mapping, the number returned by the function for a given statement instance is the number of the processor where it has to be executed. In an $n$-dimensional time-schedule, the statement instance with the logical date $(a_1 \ldots a_n)$ is executed before the one with associated date $(b_1 \ldots b_n)$ iff $\exists i, 0 \leq i < n, (a_1 \ldots a_i) = (b_1 \ldots b_i) \land a_{i+1} < b_{i+1}$, i.e. they follow the lexicographic order. For instance, we can easily capture the sequential execution order of any static control program with scheduling functions by using the abstract syntax tree of this program [12]: we can read such functions for the program in Figure 1 directly off the AST shown in Figure 2, e.g. $\theta_{S1}(\vec{x}_{S1}) = (0, i, 0)^T$, $\theta_{S2}(\vec{x}_{S2}) = (0, i, 1, j, 0)^T$, $\theta_{S3}(\vec{x}_{S3}) = (0, i, 2)^T$ etc.

3. Program Transformations

Program transformations in the polyhedral model can be specified by well chosen scattering functions. They modify the source polyhedra into target polyhedra containing the same points but in a new coordinate system, thus with a new lexicographic order. Implementing these transformations is the central part of the polyhedral framework. The current polyhedral code generation algorithms lack flexibility by addressing only a subset of the possible functions.
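Before detailing how, it helps to make the lexicographic order on logical dates concrete. A minimal sketch (the sample dates below are read off the scheduling functions of Figure 2; the particular instances are illustrative choices):

```python
def lex_less(a, b):
    """True if logical date `a` strictly precedes `b` lexicographically."""
    for ai, bi in zip(a, b):
        if ai != bi:
            return ai < bi
    return False  # equal (or prefix-equal) dates: no strict precedence

# Dates read off the AST of Figure 2 for sample statement instances:
date_s1 = (0, 1, 0)        # theta_S1 at i = 1
date_s2 = (0, 1, 1, 2, 0)  # theta_S2 at i = 1, j = 2
date_s3 = (0, 2, 2)        # theta_S3 at i = 2

assert lex_less(date_s1, date_s2)  # S1(i=1) runs before S2(i=1, j=2)
assert lex_less(date_s2, date_s3)  # and both run before S3(i=2)
```

Sorting all instance dates with this comparison reproduces exactly the sequential execution order of the original program.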
In this section we remove these limitations on the scattering functions and show how it is possible to handle very general functions in this framework.

3.1. Affine Transformations

Previous work on code generation in the polyhedral model required severe limitations on the scattering functions, e.g. to be unimodular [1, 17] (the \( T \) matrix has to be square with determinant \( \pm 1 \)) or at least to be invertible [20, 27, 22, 5]. The underlying reason was that, considering an original polyhedron defined by \( A \vec{x} \geq \vec{c} \) and the scattering function leading to the target index \( \vec{y} = T \vec{x} \), the polyhedron in the new coordinate system is defined by \( (AT^{-1}) \vec{y} \geq \vec{c} \), a change of basis. Griebl et al. proposed the first relaxation of the invertibility constraint, by using a square invertible extension of the transformation matrix [14]. Unfortunately their method led in practice to a very high control overhead. In this paper we do not impose any constraint on the transformation functions because we do not try to perform a change of basis from the original polyhedron to the target index. Instead, we apply a new lexicographic order to the polyhedra by adding new dimensions in leading positions.
Thus, from each polyhedron \( \mathcal{D} \) and scattering function \( \theta \), it is possible to build another polyhedron \( \mathcal{T} \) with the appropriate lexicographic order:

\[ \mathcal{T} = \left\{ \left(\begin{array}{c} \vec{y} \\ \vec{x} \end{array}\right) \;\middle|\; Id\,\vec{y} - T\vec{x} = \vec{0},\; A\vec{x} \geq \vec{c} \right\}, \]

so that, by definition, \( (\vec{y}, \vec{x}) \in \mathcal{T} \) if and only if \( \vec{y} = \theta(\vec{x}) \). The points inside the new polyhedron are ordered lexicographically up to the last dimension of \( \vec{y} \). By using such a transformation policy, the data of both the original iteration domains and the transformations are included in the new polyhedron. As an illustration, let us consider the polyhedron \( \mathcal{D}_{S2} \) in Figure 3(a) and the scattering function \( \theta_{S2}(i, j) = 2i + j \). The corresponding scattering matrix \( T = \begin{bmatrix} 2 & 1 \end{bmatrix} \) is not invertible, but it can be extended to a square invertible matrix as suggested by Griebl et al. [14]. The usual resulting polyhedron is shown in Figure 3(b). Our policy leads directly to the polyhedron in Figure 3(c), provided we choose the lexicographic order for the free dimensions. A projection onto \( i' \) and \( i \) would lead to the result in Figure 3(b). The additional dimensions carry the transformation data, i.e. in this case \( j = i' - 2i \). This is helpful since during code generation we have to update the references to the iterators in the loop body, and necessary when the transformation is not invertible. Another property of this transformation policy is that it never builds rational target constraint systems. Most previous works were challenged by this problem, which occurs when the transformation function is non-unimodular. We can observe the phenomenon in Figure 3(b): the integer points without heavy dots have no images in the original polyhedron.
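This policy is easy to check by brute force on a small instance. The sketch below uses the hypothetical bounds \( 1 \leq i, j \leq 3 \) in place of the domain of Figure 3(a): it builds the points of \( \mathcal{T} \) for \( \theta_{S2}(i, j) = 2i + j \) and scans them lexicographically; every point is integral, each original iteration appears exactly once, and the trailing dimensions carry the equality needed to recover the original iterators.

```python
from itertools import product

# Hypothetical bounds standing in for D_S2 of Figure 3(a).
D = [(i, j) for i, j in product(range(1, 4), repeat=2)]

# Points of the transformation polyhedron T: the scattering dimension
# i' = 2i + j is added in leading position; i and j are kept as
# trailing dimensions, so no change of basis (and no holes) is needed.
T = sorted((2 * i + j, i, j) for i, j in D)

# Scanning T lexicographically executes each instance of S2 once, in
# the order prescribed by theta; the added dimension carries j = i' - 2i.
for ip, i, j in T:
    assert j == ip - 2 * i
```

Note that several points may share the same leading value \( i' \) (e.g. \( (1, 3) \) and \( (2, 1) \) both map to \( i' = 5 \)); the trailing dimensions order them deterministically.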
The original coordinates can be determined from the target ones by \( \text{original} = T^{-1} \text{target} \). Because \( T \) is non-unimodular, \( T^{-1} \) has rational elements. Thus some integer target points have a rational image in the original space; they are called holes. To avoid considering the holes, the strides (the steps between the integral points to consider) had to be found. Many works proposed to use the Hermite Normal Form [23] in different ways to solve the problem [20, 27, 9, 22]. In contrast, we do not change the basis of the original polyhedra, but only apply an appropriate lexicographic order.

![Figure 3. Transformation policies for \( \mathcal{D}_{S2} \) in Figure 1 with \( \theta_{S2}(i, j) = 2i + j \)](image)

As a consequence, our target systems are always integral and there are no holes in the corresponding polyhedra. The stride information is explicitly contained in the constraint systems in the form of equations. The cost of this method is the addition of new dimensions to the polyhedra. This can be a relevant issue since, first, it increases the complexity of the scanning step and, second, it increases the constraint system size, while high-level code generation typically requires a lot of memory. In practice, processing of the additional dimensions is often trivial with the method presented in section 4. Eventually, our prototype is more efficient and needs less memory than those based on other methods (see section 5).

3.2. Rational Transformations

Some automatic allocators or schedulers ask for rational transformations [12]. Thus scattering functions can have a more general shape:

$$\theta(\vec{x}) = (T\vec{x} + \vec{t}) \div \vec{d},$$

where \(\div\) means componentwise integer division and \(\vec{d}\) is a constant vector whose \(k^{th}\) element is the divisor applied to the \(k^{th}\) dimension of \(T\vec{x} + \vec{t}\). In practice, divisors often correspond to resource constraints (e.g. the number of processors, of functional units etc.).
Wetzel proposed the first solution to this problem, but only for a single divisor value for the whole scheduling function, and leading to complex control [25]. Again, we propose to add dimensions to solve the problem. For each rational element of \((T\vec{x})\div\vec{d}\), we introduce an auxiliary variable standing for the quotient of the division. For instance, let us consider the original polyhedron in Figure 4(a) and the scheduling function \(\theta(i) = i \div 3 + 1\). We introduce \(q\) and \(r\) such that \(i = 3q + r\), with by definition \(0 \leq r = i - 3q \leq 2\). Then we can deal with an equivalent integral transformation \(\theta'(q) = q + 1\) under the constraint \(0 \leq i - 3q \leq 2\). This amounts to strip-mining the dimension \(i\), as shown in Figure 4(b). With several non-integer coefficients, we just need more auxiliary variables standing for the results of the divisions.

3.3. Non-Uniform Transformations

As the power of program analysis increased with time, program transformations became more and more complex in order to face new optimization opportunities. Starting from simple transformations for a single loop nest, they evolved to statement-wise functions and more recently to several transformations per statement, each of them applying to a subset of the iteration domain. Thus a scattering function for a statement with iteration domain \(\mathcal{D}\) may be of the following form:

$$\theta(\vec{x}) = \begin{cases} T_1\vec{x} + \vec{t}_1 & \text{if } \vec{x} \in \mathcal{D}_1 \\ T_2\vec{x} + \vec{t}_2 & \text{if } \vec{x} \in \mathcal{D}_2 \\ \quad\vdots \\ T_n\vec{x} + \vec{t}_n & \text{if } \vec{x} \in \mathcal{D}_n \end{cases}$$

where the \(\mathcal{D}_i\), \(1 \leq i \leq n\), form a partition of \(\mathcal{D}\). It is quite simple to handle such transformations, at least when the code generator deals efficiently with more than one polyhedron, by explicitly splitting the considered polyhedra into partitions.
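Returning to the rational scattering of section 3.2, the auxiliary-variable rewriting can be checked mechanically. A minimal sketch (the bounds \(0 \leq i \leq 10\) are hypothetical, since the domain of Figure 4(a) is not reproduced here):

```python
# theta(i) = i div 3 + 1, handled with an auxiliary quotient variable q
# such that i = 3q + r with 0 <= r = i - 3q <= 2 (strip-mining of i).
D = range(0, 11)  # hypothetical original domain

points = []
for i in D:
    for q in range(0, len(D)):        # candidate quotients
        if 0 <= i - 3 * q <= 2:       # the added constraint
            points.append((q + 1, q, i))  # (theta'(q), q, i)
points.sort()                         # lexicographic scanning order

# Each i admits exactly one quotient q = i // 3, so the integral
# transformation theta'(q) = q + 1 is equivalent to the rational theta.
assert [i for _, _, i in points] == list(D)
assert all(t == i // 3 + 1 for t, _, i in points)
```

Scanning the leading dimension produces blocks of at most three consecutive values of \(i\) per value of \(q\), which is exactly the strip-mined structure of Figure 4(b).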
When the iteration domain is split using affine conditions, as in index set splitting [13], building the partition is trivial, but more general partitions with non-affine criteria are possible as long as we can express each subset as a polyhedron. For instance, Slama et al. found programs where the best parallelization requires non-uniform transformations, e.g. \(\theta(i) = \text{if } (i \bmod d = n) \text{ then } \ldots \text{ else } \ldots\), where \(d\) is a scalar value and \(n\) a constant, possibly parametric. They propose a code generation scheme dedicated to this problem [24]. It is possible to handle this in our framework by adding new dimensions. For instance, the iteration domain corresponding to the then part of \(\theta(i)\) would be the original one with the additional constraint \(i = jd + n\) for a new dimension \(j\), while the additional constraints for the else part could be \(i \leq jd + n - 1\) and \(i \geq jd + n + 1 - d\). Then we can apply the transformations to the resulting polyhedra as shown in section 3.1.

![Figure 4. Rational reordering $\theta(i) = i/3 + 1$](image-url)

4. Scanning Polyhedra

We showed in previous sections that any static control program can be specified using a set of iteration domains and scattering functions, and that these can be merged to create new polyhedra with the appropriate lexicographic order. Generating code in the polyhedral model amounts to finding a set of nested loops visiting each integral point of each polyhedron, once and only once, following this order. This is a critical step in the framework since the final program effectiveness highly depends on the target code quality. In particular, we must ensure that a bad control management does not spoil performance, for instance by producing redundant conditions, complex loop bounds or under-used iterations. On the other hand, we have to avoid code explosion, for instance because a large code may pollute the instruction cache. At present, the Quilleré et al.
method gives the best results when we have to generate a scanning code for several polyhedra [21, 2]. This technique is guaranteed to avoid redundant control while scanning the scattering dimensions. However, it suffers from some limitations, e.g. high complexity and needless code explosion. In the following, we propose some solutions to these drawbacks. We present the general algorithm with some adaptations to our purpose in section 4.1. We address the problem of reducing the code size without consequence on code efficiency in section 4.2. Finally, in section 4.3 we discuss complexity issues.

4.1. Extended Quilleré et al. Algorithm

Quilleré et al. recently proposed the first code generation algorithm building the target code without redundant control directly, instead of starting from a naive code and trying to improve it [21]. As a consequence, this method never fails to remove a guard and the processing is easier. Eventually it generates a better code more efficiently. The algorithm relies on polyhedral operations that can be implemented by e.g. PolyLib\(^1\) [26]. The basic mechanism is, starting from the list of polyhedra to scan, to recursively generate each level of the abstract syntax tree (AST) of the scanning code. The nodes of the AST are labelled with a polyhedron \( \mathcal{T} \) and have a list of children (notation \( \mathcal{T} \rightarrow (...) \)). The leaves are labelled with a polyhedron and a statement (notation \( \mathcal{T}_S \)). Each recursion builds an AST node list as described by the algorithm in Figure 5. It starts with the following input: (1) the list of transformed polyhedra to be scanned \( (\mathcal{T}_{S_1}, ..., \mathcal{T}_{S_n}) \); (2) the context, i.e. the set of constraints on the global parameters; (3) the first dimension \( d = 1 \).
Generating the code from the AST is a trivial step: the constraint system labelling each node can be directly translated as loop bounds or as surrounding conditionals, depending on whether the constraints concern the dimension corresponding to the node level or not. This algorithm is somewhat different from the one presented by Quilleré et al. in [21] and its improved version in [2]; our two main contributions are the following: reducing the code size without degrading code performance (step 7), and reducing the code generation processing time by using pattern matching (step 3). We propose to illustrate this algorithm (without step 7) through the example in Figure 6. We have to generate the scanning code for the three polyhedra in Figure 6(a). For the sake of simplicity, we will show directly the translations of the node constraint systems into source code. We first compute the intersections with the context (i.e., at this point, the constraints on the parameters, supposed to be \( n \geq 2 \) and \( m \geq n \)). We project the polyhedra onto the first dimension, \( i \), then we separate them into disjoint polyhedra. As shown in Figure 6(b), this results in two disjoint polyhedra. We can now generate the scanning code for this first dimension. Then we recurse on the next dimension, repeating the process for each polyhedron list (in this example, there are now two lists: one inside each generated outer loop). We intersect each polyhedron with the new context, i.e. the outer loop iteration domains; then we project the resulting polyhedra onto the outer dimensions. Finally we separate these projections into disjoint polyhedra. This last process is trivial for the second list but yields several domains for the first list, as shown in Figure 6(c). Then we generate the code associated with the new dimension, and since this is the last one, a scanning code is fully generated.
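The projection-and-separation step of this walkthrough can be sketched in one dimension. Assuming the concrete (hypothetical) parameter values \( n = 4 \) and \( m = 7 \), chosen to satisfy \( n \geq 2 \) and \( m \geq n \), the projections onto \( i \) are \( [1, n] \) for \( S_1 \) and \( S_2 \) and \( [1, m] \) for \( S_3 \), and separating them yields the two disjoint pieces of Figure 6(b):

```python
# Hypothetical parameter values satisfying the context n >= 2, m >= n.
n, m = 4, 7
projections = {"S1": (1, n), "S2": (1, n), "S3": (1, m)}

# Cut the i axis at every projection endpoint, then label each
# elementary chunk with the statements whose projection covers it:
# a one-dimensional version of the separation of step 3.
cuts = sorted({lb for lb, _ in projections.values()}
              | {ub + 1 for _, ub in projections.values()})
pieces = []
for lo, hi in zip(cuts, cuts[1:]):
    stmts = [s for s, (lb, ub) in projections.items()
             if lb <= lo and hi - 1 <= ub]
    if stmts:
        pieces.append(((lo, hi - 1), stmts))

# pieces == [((1, 4), ['S1', 'S2', 'S3']), ((5, 7), ['S3'])]
```

The first piece becomes the loop `do i = 1, n` scanning \( S_1 \), \( S_2 \) and \( S_3 \), the second the loop `do i = n+1, m` scanning \( S_3 \) alone: exactly the outer structure of Figure 6(b).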
Lastly, we remove dead code (for instance in the first loop nest in Figure 6(c), the iteration \( i = n \) is useful only for a small part of the loop body) by applying a new projection step during the recursion backtrack. The final code is shown in Figure 6(d).

4.2. Reducing code size

The power of optimizing methods in the polyhedral model is of particular interest for embedded system compilation. One of the main constraints for such applications is the object code size, because of inherent hardware limitations. Generated code size may be kept under control for this purpose, or simply to avoid instruction cache pollution. It is easy to manage with iterative code generation methods [15]: they start from a naive (inefficient) and short code and eliminate the control overhead by selecting conditions to remove and performing code hoisting (splitting the code on the chosen condition and copying the originally guarded code into the two branches). Thus, stopping code hoisting stops code growth. With recursive code generation methods as discussed in this paper, it is always possible to choose not to separate the polyhedra and to generate a smaller code with conditions [21]. These techniques always operate at the price of a less efficient generated code. This section presents another way, with a quite small impact on control overhead and a possibly significant code size improvement. It is based on a simple observation: separating polyhedra often results in isolating some points, while this is not always necessary. Figure 6 shows a dramatic example of this phenomenon (hoisting-based code generators such as Omega's CodeGen face the same issue). Integrating these points inside host polyhedra reduces the code size by adding new iterations. The problem was first pointed out by Bouchebaba in the particular case of 2-dimensional loop nest fusion [4].

---

\(^1\) PolyLib is available at http://icps.u-strasbg.fr/PolyLib
He extracted the 14 situations where a vertex should not be fused with a loop for his purpose and applied the fusion in the other cases. In the following, a solution for general code generation is presented, based on the properties of the code construction algorithm in Figure 5. To know whether the separation step will result in needless polyhedron peeling, it is necessary to compute this separation. In addition, we have to achieve the recursion on every dimension, since the projections hide some of them during the separation process. Thus, we remove isolated points at the end of each recursion (step 7). At a given depth of the recursion, the removal process is applied for each loop node in the list (i.e. each node such that the dimension corresponding to the current depth is not constant):

1. Define the point candidate to merge with the node: scan the node branch in depth-first order and build the list of statements in the leaves. The point candidate has to fit this statement list, since it is guaranteed after dead code elimination that each statement in the leaves is executed at least once; thus only a point with this structure may be merged with the node.

2. Check whether such a point directly precedes or follows the node in the lexicographic ordering graph built in steps 4 and 6 (details on this graph construction can be found in [21]). This graph is only based on the projected dimensions; however, if a point candidate directly follows the node in the ordering graph and cannot be merged, this means that an input polyhedron is not convex, a contradiction.

3. If the previous test was a success, merge the point candidates with the node by using a polyhedral union, and remove the points from the list of polyhedra.

We can apply this process to the example in Figure 6.

---

**Figure 5. Extended Quilleré et al. Algorithm**

| CodeGeneration: build the AST of a polyhedra scanning code without redundant control.
| **Input:** a polyhedron list \((\mathcal{T}_{S_1}, ..., \mathcal{T}_{S_n})\), a context \(C\), the current dimension \(d\). |
| **Output:** the abstract syntax tree of the code scanning the polyhedra of the input list. |

1. Intersect each polyhedron \(\mathcal{T}_{S_i}\) in the list with the context \(C\) in order to restrict the domain (and subsequently the code that will be generated) to the context of the surrounding loop nest.

2. Compute for each resulting polyhedron \(\mathcal{T}_{S_i}\) its projection \(P_i\) onto the outermost \(d\) dimensions and consider the new list of \(P_i \rightarrow \mathcal{T}_{S_i}\).

3. Separate the projections into a new list of disjoint polyhedra: given a list of \(m\) polyhedra, start with the first two polyhedra \(P_1 \rightarrow \mathcal{T}_{S_1}\) and \(P_2 \rightarrow \mathcal{T}_{S_2}\) by computing \((P_1 - P_2) \rightarrow \mathcal{T}_{S_1}\) (i.e. \(S_1\) alone), \((P_1 \cap P_2) \rightarrow (\mathcal{T}_{S_1}, \mathcal{T}_{S_2})\) (i.e. \(S_1\) and \(S_2\)) and \((P_2 - P_1) \rightarrow \mathcal{T}_{S_2}\) (i.e. \(S_2\) alone); then, for the three resulting polyhedra, make the same separation with \(P_3 \rightarrow \mathcal{T}_{S_3}\) and so on.

4. Build the lexicographic ordering graph, where there is an edge from a polyhedron \(P_i \rightarrow \mathcal{T}_{S_i}\) to another polyhedron \(P_j \rightarrow (\mathcal{T}_{S_p}, ..., \mathcal{T}_{S_q})\) if its scanning code has to precede the other's to respect the lexicographic order; then sort the list according to a valid order.

5. For each polyhedron \(P \rightarrow (\mathcal{T}_{S_p}, ..., \mathcal{T}_{S_q})\) in the list:
   - (a) Compute the stride that the inner dimensions impose on the current one, and find the lower bound by looking for stride constraints in the \((\mathcal{T}_{S_p}, ..., \mathcal{T}_{S_q})\) list.
   - (b) While there is a polyhedron in \((\mathcal{T}_{S_p}, ..., \mathcal{T}_{S_q})\):
     - i. Merge successive polyhedra with another dimension to scan into a new list.
     - ii. Recurse on the new list with the new loop context \(C \cap P\) and the next dimension \(d+1\).

6.
For each polyhedron \(P \rightarrow (\text{inside})\) in the list, apply steps 2 to 4 of the algorithm to the \(\text{inside}\) list in order to remove dead code. Then consider the concatenation of the resulting lists as the new list.

7. Make all the possible unions of host polyhedra with point polyhedra to reduce code size.

8. Return the polyhedron list.

---

\(\mathcal{T}_{S_1}\): \(1 \leq i \leq n\), \(j = i\); \(\mathcal{T}_{S_2}\): \(1 \leq i \leq n\), \(i \leq j \leq n\); \(\mathcal{T}_{S_3}\): \(1 \leq i \leq m\), \(j = n\)

(a) Initial domains to scan; (b) Projection and separation onto the first dimension; (c) Recursion on next dimension; (d) Backtrack with dead code removing

**Figure 6. Step by step code generation example**

The translation of the AST after dead code removal for dimension \( j \) is equivalent to the code in Figure 6(c). The statement candidate for the \( j \) loop is \( S_2 \). We can merge both \( S_2 \) points before and after this loop. Then the dead code removal for dimension \( i \) would only isolate the point corresponding to \( i = n \); the new candidate would be \( S_1 S_2 S_3 \).
It can be merged, and the final code is shown in Figure 7, with an object code size of 176B while the previous one in Figure 6(d) is 464B (each statement is a 2-dimensional array entry increment).

**Figure 7. Compacted code of Figure 6(d)**

4.3. Complexity Issues

The main computing kernel in the code generation process is the separation into disjoint polyhedra (step 3). Given a list of \( n \) polyhedra, the worst-case complexity is \( O(3^n) \) polyhedral operations (exponential themselves). In addition, the memory usage is very high, since we have to allocate memory for each separated domain. For both issues, we propose a partial solution. We use pattern matching to reduce the number of polyhedral computations: at a given depth, the domains are often the same (this is a property of the input codes; it happens for 17% of the operations in the benchmark set presented in section 5), or disjoint (this is a property of the scheduling matrices; it happens for 36% of the operations in the benchmark set of section 5). Thus we check quickly for these properties before any polyhedral operation by comparing directly the elements of the constraint systems (this finds 75% of the equalities), and by comparing the unknowns having fixed values (this finds 94% of the disjunctions). When one of these properties is proved, we can directly give the trivial solution of the operation. This method improves performance by a factor close to 2. To avoid a memory allocation explosion, when we detect a high memory consumption, we continue the code generation process for the remaining recursions with a more naive algorithm, leading to a less efficient code but using far less memory. Instead of separating the projections into disjoint polyhedra (step 3 of the algorithm), we merge them when their intersections are not empty. We then work with a set of unions, significantly smaller than a set of disjoint polyhedra. Other parts of the algorithm are left unmodified.
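These two quick tests can be sketched on a toy constraint-system representation. This is an illustration only; the row encoding below, tuples \((\text{is\_eq}, c_1, \ldots, c_n, \text{const})\) meaning \(\sum_k c_k x_k + \text{const} = 0\) or \(\geq 0\), is a hypothetical choice, not PolyLib's actual data structure:

```python
def surely_equal(p1, p2):
    """Identical constraint systems describe identical polyhedra."""
    return sorted(p1) == sorted(p2)

def fixed_values(p):
    """Unknowns fixed to a constant by a single-variable equality row."""
    fixed = {}
    for is_eq, *coeffs, const in p:
        nz = [k for k, c in enumerate(coeffs) if c != 0]
        if is_eq and len(nz) == 1 and abs(coeffs[nz[0]]) == 1:
            fixed[nz[0]] = -const * coeffs[nz[0]]  # x_k = -const / c_k
    return fixed

def surely_disjoint(p1, p2):
    """An unknown fixed to two different constants: empty intersection."""
    f1, f2 = fixed_values(p1), fixed_values(p2)
    return any(f1[k] != f2[k] for k in f1.keys() & f2.keys())

# Two systems over (i, j): { j = 2, i >= 1 } and { j = 5, i >= 1 }.
p1 = [(1, 0, 1, -2), (0, 1, 0, -1)]
p2 = [(1, 0, 1, -5), (0, 1, 0, -1)]
assert not surely_equal(p1, p2)
assert surely_disjoint(p1, p2)  # j is fixed to 2 in p1 and to 5 in p2
```

Both tests are purely syntactic and run in near-linear time, so they cost almost nothing when they fail and save an exponential-cost polyhedral operation when they succeed.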
The drawback of this method is the generation of costly conditionals ruling whether an integral point has to be scanned or not. This method can be compared to using the convex hull of the polyhedra [15, 25, 14, 5], but it is more general since it can deal with complex bounds (typically maxima or minima of parameterized affine constraints, e.g. \( \max(m, n) \)) that do not describe a convex polyhedron.

5. Experimental Results

We implemented this algorithm and integrated it into a complete polyhedral transformation infrastructure inside Open64/ORC [3]. Such a modern compiler provides many steps enabling the extraction of large static control parts (e.g. function inlining, loop normalization, goto elimination, induction variable substitution etc.). This section presents a study of the applicability of the presented framework to large, representative SCoPs that have been extracted from the SPECfp2000 and PerfectClub benchmarks. The chosen methodology was to perform the code regeneration of all these static control parts. Figure 8 gives some information on the code regeneration problem for a set of SPECfp2000 and PerfectClub benchmarks. The first two columns give the total number of SCoPs and iteration domains in the corresponding benchmark. These problems are considered to be hard; previous related experiments with Omega [15] or LooPo [14] showed how challenging it was to produce efficient code just for ten or so polyhedra without time or memory explosion. The two columns of the code generation section show how many SCoPs have to be partially regenerated in a suboptimal way because of a memory explosion, and the total code generation time on an Intel Pentium III 1 GHz architecture with 512 MB RAM. The three challenging problems have a lot of free parameters (13 or 14), which leads to a high code versioning; the biggest one, in lucas (more than 1700 domains), took 22 minutes and 1 GB RAM to be optimally generated on an Itanium 1 GHz machine.
These results are very encouraging since the code generator proved its ability to regenerate real-life problems with hundreds of statements and a lot of free parameters. Both code generation time and memory requirements are acceptable in spite of a worst-case exponential algorithm complexity.

**Figure 8. Code generation of static control parts in high-performance applications**

We compared the results achieved by our code generator, CLooG\(^2\), with a previous implementation of the Quilleré et al. algorithm, LoopGen 0.4 [21] (the differences between CLooG and LoopGen are a direct consequence of the improvements discussed in this paper), and the most widely used code generator in the polyhedral model, i.e. Omega's CodeGen 1.2 [15]. Because of inherent limitations (mainly memory explosion), these generators are not able to deal with all the real-life code generation problems in the benchmark set; the robustness section of Figure 8 gives the percentages of the input problems they are able to deal with\(^3\). These results illustrate the existing need for scalability of code generation schemes. Hence, comparisons are done on the only common subset. The two criteria are the code generation time and the generated code size with respect to the original code size. The results are given in Figure 9. They show that generating directly a code without redundant control is far more efficient than trying to improve a naive one.
Our pattern matching strategy demonstrates its effectiveness, since we observe a significant speedup of 4.05 between CLooG and CodeGen and of 2.57 between CLooG and LoopGen. Code sizes generated by LoopGen are typically greater than CodeGen results, by 38% on average, because it removes more control overhead at the price of code size. The code size improvement methodology presented in this paper significantly reduces this increase, to 6% on average, while keeping up the generated code effectiveness. In conclusion, our algorithm is much faster than CodeGen and noticeably faster than LoopGen. LoopGen generates larger code, while our code and the CodeGen code are of about the same size. It remains to compare the run-time overheads: our code has the same performance as the original code, and we believe this should be true also for LoopGen. For technical reasons, assessing the performance of CodeGen is difficult, and this is left for future work.

\(^2\) CLooG is available at http://www.prism.uvsq.fr/~cedb

\(^3\) We only consider the code generation ability: for technical reasons, we did not check the correctness of Omega's CodeGen results.

6. Related Work

Ancourt and Irigoin [1] proposed the first solution to the polyhedron scanning problem. Their seminal work was based on the Fourier-Motzkin pair-wise elimination [23]. The scope of their method was very restrictive, since it could be applied to only one polyhedron with unimodular transformation (scattering) matrices. The basic idea was to apply the transformation function as a change of basis of the loop indices, then, for each new dimension, to project the polyhedron onto the corresponding axis and thus find the loop bounds. The main drawback of this method was the large amount of redundant control. Most further works on code generation tried to extend this first technique in order to deal with more general transformations.
Li and Pingali [20], Xue [27], Darte [9] and Ramanujam [22] relaxed the unimodularity constraint to an invertibility constraint and proposed ways to deal with non-unit strides (loop increments may differ from one). They all use the Hermite Normal Form [23] to find the strides, and the classical Fourier-Motzkin elimination to compute the loop bounds. In addition, Li and Pingali proposed a completion algorithm to build a non-unimodular transformation function from a partial matrix, such that the transformation stays legal with respect to dependences [20]. In the same spirit, Griebl et al. relaxed the invertibility constraint and proposed to deal with arbitrary matrices by using a square invertible extension of the matrix [14]. This paper has shown how to deal with general affine transformation functions without constraints on unimodularity, invertibility or even regularity. As an alternative to the Fourier-Motzkin elimination method, Collard et al. [7] presented a loop bound calculation technique based on a parameterized version of the dual simplex algorithm [10]. Another method makes successive projections of the polyhedron onto the axes as in [1], but uses the Chernikova algorithm [18] to work with a polyhedron represented as a set of rays and vertices [19]. These two techniques have the good property of producing code without any redundant control (for a single polyhedron), but while the second one can generate a very compact code, the first one can quickly explode in length. The problem of scanning more than one polyhedron with the same code was first solved by generating a naive perfectly nested code and then (partially) eliminating redundant guards [15]. Another way was to generate the code for each polyhedron separately, and then to merge the results [14, 5]. This solution generates a lot of redundant control, even if there were no redundancies in the separate codes. Quilleré et al.
proposed to recursively generate a set of loop nests scanning several unions of polyhedra by separating them into subsets of disjoint polyhedra and generating the corresponding loop nests from the outermost to the innermost levels [21]. This latter approach currently provides the best solutions, since it guarantees that there is no redundant control. However, it suffers from some limitations, e.g., high complexity and needless code explosion. The present work presents solutions to these drawbacks.

7. Conclusion

The current trend in program optimization is to separate the selection of an optimizing transformation from its application to the source code. Most transformations are reorderings, optionally followed by modifications to the statements themselves. The program transformer must be informed of the selected reordering, and this is usually done by way of directives, like tile or fuse or skew. It is difficult to decide the completeness of a set of directives, or to understand their interactions. We claim that giving a scattering function is another way of specifying a reordering, and that it has several advantages over the directive method: it is more precise, it has better compositionality properties, and there are many cases in which automatic selection of scattering functions is possible. This paper provides for this purpose a flexible transformation framework for state-of-the-art parallelization and optimization techniques, by removing every constraint on the scattering functions beyond affinity. The only drawback was that deducing a program from a scattering function took time, and was likely to introduce much runtime overhead. We believe that tools like CLooG have removed these difficulties. The whole source-to-polyhedra-to-source transformation was successfully applied to the 12 benchmarks, with a significant speedup of 4.05 with respect to the most widely used code generator, for the benchmark parts it is able to deal with.
Ongoing work aims at reasoning upstream of the code generation step. Pointing out the most compute-intensive parts of the source programs [6] would make it possible to steer the code generator away from meaningless control overhead removal that costs both time and code size. Another way to reduce both complexity and code versioning is to find the affine constraints on, and between, the parameters of every static control part [8].

**Acknowledgments**

The author would like to thank Paul Feautrier for his help in writing this paper. Many thanks also to all the CLooG contributors, and especially to Sven Verdoolaege.

**References**
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00017260/document", "len_cl100k_base": 9597, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 42381, "total-output-tokens": 11835, "length": "2e13", "weborganizer": {"__label__adult": 0.0003886222839355469, "__label__art_design": 0.00034999847412109375, "__label__crime_law": 0.0003600120544433594, "__label__education_jobs": 0.0004878044128417969, "__label__entertainment": 6.014108657836914e-05, "__label__fashion_beauty": 0.0001697540283203125, "__label__finance_business": 0.0002236366271972656, "__label__food_dining": 0.0004193782806396485, "__label__games": 0.0006532669067382812, "__label__hardware": 0.001552581787109375, "__label__health": 0.0006327629089355469, "__label__history": 0.0002944469451904297, "__label__home_hobbies": 0.0001329183578491211, "__label__industrial": 0.000583648681640625, "__label__literature": 0.00022923946380615232, "__label__politics": 0.00031566619873046875, "__label__religion": 0.0006327629089355469, "__label__science_tech": 0.03228759765625, "__label__social_life": 7.987022399902344e-05, "__label__software": 0.00437164306640625, "__label__software_dev": 0.9541015625, "__label__sports_fitness": 0.0004127025604248047, "__label__transportation": 0.0007724761962890625, "__label__travel": 0.0002551078796386719}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 45398, 0.03927]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 45398, 0.39318]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 45398, 0.85749]], "google_gemma-3-12b-it_contains_pii": [[0, 850, false], [850, 4940, null], [4940, 7801, null], [7801, 12125, null], [12125, 16376, null], [16376, 22172, null], [22172, 27177, null], [27177, 30157, null], [30157, 35513, null], [35513, 39701, null], [39701, 45398, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 850, true], [850, 4940, null], [4940, 7801, null], [7801, 12125, null], [12125, 16376, null], [16376, 22172, null], [22172, 27177, null], [27177, 30157, null], [30157, 35513, null], [35513, 39701, null], [39701, 45398, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 45398, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 45398, null]], "pdf_page_numbers": [[0, 850, 1], [850, 4940, 2], [4940, 7801, 3], [7801, 12125, 4], [12125, 16376, 5], [16376, 22172, 6], [22172, 27177, 7], [27177, 30157, 8], [30157, 35513, 9], [35513, 39701, 10], [39701, 45398, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45398, 0.06796]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
541f7ef804542d7a54f35d5e04dabe0d423ab0e2
[REMOVED]
{"Source-Url": "https://www.springer.com/cda/content/document/cda_downloaddocument/9783319394169-c1.pdf?SGWID=0-0-45-1577571-p179991915", "len_cl100k_base": 9488, "olmocr-version": "0.1.53", "pdf-total-pages": 29, "total-fallback-pages": 0, "total-input-tokens": 60238, "total-output-tokens": 14810, "length": "2e13", "weborganizer": {"__label__adult": 0.00037169456481933594, "__label__art_design": 0.0008873939514160156, "__label__crime_law": 0.00034880638122558594, "__label__education_jobs": 0.00225830078125, "__label__entertainment": 0.0001183152198791504, "__label__fashion_beauty": 0.00020420551300048828, "__label__finance_business": 0.0005521774291992188, "__label__food_dining": 0.0003612041473388672, "__label__games": 0.0006465911865234375, "__label__hardware": 0.0006780624389648438, "__label__health": 0.0005393028259277344, "__label__history": 0.0004665851593017578, "__label__home_hobbies": 0.00012409687042236328, "__label__industrial": 0.0006961822509765625, "__label__literature": 0.0007100105285644531, "__label__politics": 0.0003261566162109375, "__label__religion": 0.000614166259765625, "__label__science_tech": 0.11248779296875, "__label__social_life": 0.00014197826385498047, "__label__software": 0.01508331298828125, "__label__software_dev": 0.861328125, "__label__sports_fitness": 0.00028514862060546875, "__label__transportation": 0.0006895065307617188, "__label__travel": 0.0002287626266479492}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 60401, 0.03693]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 60401, 0.50147]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 60401, 0.88242]], "google_gemma-3-12b-it_contains_pii": [[0, 1247, false], [1247, 3948, null], [3948, 7163, null], [7163, 10399, null], [10399, 12020, null], [12020, 15466, null], [15466, 17088, null], [17088, 20619, null], [20619, 24109, null], [24109, 27162, 
null], [27162, 28806, null], [28806, 30759, null], [30759, 33637, null], [33637, 36325, null], [36325, 36978, null], [36978, 39951, null], [39951, 41354, null], [41354, 41756, null], [41756, 43103, null], [43103, 44243, null], [44243, 45299, null], [45299, 45932, null], [45932, 46064, null], [46064, 48708, null], [48708, 52481, null], [52481, 56289, null], [56289, 59900, null], [59900, 60223, null], [60223, 60401, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1247, true], [1247, 3948, null], [3948, 7163, null], [7163, 10399, null], [10399, 12020, null], [12020, 15466, null], [15466, 17088, null], [17088, 20619, null], [20619, 24109, null], [24109, 27162, null], [27162, 28806, null], [28806, 30759, null], [30759, 33637, null], [33637, 36325, null], [36325, 36978, null], [36978, 39951, null], [39951, 41354, null], [41354, 41756, null], [41756, 43103, null], [43103, 44243, null], [44243, 45299, null], [45299, 45932, null], [45932, 46064, null], [46064, 48708, null], [48708, 52481, null], [52481, 56289, null], [56289, 59900, null], [59900, 60223, null], [60223, 60401, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 60401, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 60401, null]], 
"pdf_page_numbers": [[0, 1247, 1], [1247, 3948, 2], [3948, 7163, 3], [7163, 10399, 4], [10399, 12020, 5], [12020, 15466, 6], [15466, 17088, 7], [17088, 20619, 8], [20619, 24109, 9], [24109, 27162, 10], [27162, 28806, 11], [28806, 30759, 12], [30759, 33637, 13], [33637, 36325, 14], [36325, 36978, 15], [36978, 39951, 16], [39951, 41354, 17], [41354, 41756, 18], [41756, 43103, 19], [43103, 44243, 20], [44243, 45299, 21], [45299, 45932, 22], [45932, 46064, 23], [46064, 48708, 24], [48708, 52481, 25], [52481, 56289, 26], [56289, 59900, 27], [59900, 60223, 28], [60223, 60401, 29]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 60401, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
2ea82f42f82ca7f667ca736b43e498aa5d98e5f8
Improving Agile Retrospectives

The Pearson Addison-Wesley Signature Series provides readers with practical and authoritative information on the latest trends in modern technology for computer professionals. The series is based on one simple premise: great books come from great authors. Books in the Mike Cohn Signature series are personally chosen by Cohn, a founder of the Agile Alliance and highly regarded Agile expert and author. Mike's signature ensures that he has worked closely with authors to define topic coverage, book scope, critical content, and overall uniqueness. The expert signatures also symbolize a promise to our readers: you are reading a future classic. Visit informit.com/awss/cohn for a complete list of available publications.

To mom and dad.

Contents at a Glance

Foreword by Jutta Eckstein
Preface
Acknowledgments
About the Author
Chapter 1 Retrospectives 101
Chapter 2 Preparing Retrospectives
Chapter 3 The First Retrospective
Chapter 4 The Retrospective Facilitator
Chapter 5 From the Metaphor to the Retrospective
Chapter 6 Systemic Retrospectives
Chapter 7 Solution-Focused Retrospectives
Chapter 8 Distributed Retrospectives
Chapter 9 Alternative Approaches
Chapter 10 Typical Problems and Pitfalls
Chapter 11 Change Management
Index

Contents

Foreword by Jutta Eckstein
Preface
Acknowledgments
About the Author

Chapter 1 Retrospectives 101
1.1 What Is a Retrospective?
1.2 New Year's Eve Retrospective
1.3 The Retrospective Phase Model
1.3.1 Phase 1: Set the Stage
1.3.2 Phase 2: Check Hypothesis
1.3.3 Phase 3: Gather Data
1.3.4 Phase 4: Generate Insights
1.3.5 Phase 5: Define Experiments
1.3.6 Phase 6: Closing
1.4 Finding Activities for Each of the Phases
1.4.1 Agile Retrospectives Book
1.4.2 Retromat
1.4.3 Retrospective Wiki
1.4.4 Tasty Cupcakes
1.4.5 Gamestorming
1.5 The Prime Directive
Summary

Chapter 2 Preparing Retrospectives
2.1 Preparation
2.1.1 What Period of Time Should Be Discussed?
2.1.2 Who Should Take Part?
2.1.3 Is There a Topic?
2.2 The Right Time, the Right Place
2.3 The Right Material
2.3.1 The Right Markers
2.3.2 The Right Sticky Notes
2.3.3 The Right Flipchart Paper

Chapter 3 The First Retrospective
3.1 Preparation
3.2 Set the Stage: Car Comparison
3.3 Gather Data
3.4 Generate Insights: 5 Whys
3.5 Define Next Experiments: Brainstorming
3.6 Closing: ROTI
Summary

Chapter 4 The Retrospective Facilitator
4.1 How Do I Become a Good Facilitator?
4.1.1 Respect Different Communication Styles
4.1.2 Paraphrasing
4.1.3 Support Participants
4.1.4 Stacking
4.1.5 Encourage
4.1.6 Feedback Emotion
4.1.7 Intended Silence
4.1.8 Listen for Common Ground
4.2 Visual Facilitation
4.2.1 The 1×1 of Visual Structure
4.3 Visual Retrospectives
4.3.1 The Speedboat Retrospective
4.3.2 Trading Cards
4.3.3 Perfection Game
4.3.4 Force Field Analysis
4.3.5 Sources of Inspiration for Visual Facilitation
4.4 Internal or External
4.4.1 Tips for Internal Facilitators
4.4.2 External Facilitators
4.5 After the Retro Is Before the Retro
Summary

Chapter 5 From the Metaphor to the Retrospective
5.1.3 Generate Insights
5.1.4 Define Experiments and Hypothesis
5.1.5 Closing
5.2 The Soccer Retrospective
5.2.1 Preparation
5.2.2 Set the Stage
5.2.3 Gather Data
5.2.4 Generating Insights
5.2.5 Define Next Experiments and Hypothesis
5.2.6 Closing
5.3 The Train Retrospective
5.3.1 Set the Stage
5.3.2 Gather Data
5.3.3 Generate Insights
5.3.4 Define Experiments and Hypothesis
5.3.5 Closing
5.4 The Kitchen Retrospective
5.4.1 Set the Stage
5.4.2 Gather Data
5.4.3 Generate Insights
5.4.4 Define Experiments and Hypothesis
5.4.5 Closing
5.5 The Pirate Retrospective
5.5.1 Set the Stage
5.5.2 Gather Data
5.5.3 Generate Insights
5.5.4 Define Experiments and Hypothesis
5.5.5 Closing
Summary

Chapter 6 Systemic Retrospectives
6.1 Systems
6.1.1 Static and Dynamic
6.1.2 Complicated and Complex
6.2 System Thinking
6.2.1 Causal Loop Diagrams
6.2.2 Current Reality Tree
6.2.3 Limitations of System Thinking

Chapter 7 Solution-Focused Retrospectives
7.1 The Solution-Focused Approach
7.1.1 Problem Talk Creates Problems, Solution Talk Creates Solutions
7.1.2 Focus on the Better Future
7.1.3 No Problem Happens All the Time; There Are Always Exceptions That Can Be Utilized
7.1.4 If It Works, Do More of It
7.1.5 If It's Not Working, Do Something Different
7.1.6 Small Steps Can Lead to Big Changes
7.1.7 Focus on Strength and Skills
7.1.8 Understand and Trust That Each Person Is an Expert in His or Her Own Situation
7.1.9 Keep the Attitude of Not Knowing
7.1.10 Be Patient and Confident
7.1.11 The Prime Directive of Retrospectives
7.2 A Solution-Focused Retrospective in Five Steps
7.2.1 Opening
7.2.2 Set Goals
7.2.3 Find Meaning
7.2.4 Initiate Action
7.2.5 Check Results
7.2.6 A Brief, Solution-Focused Retrospective

Chapter 8 Distributed Retrospectives
8.1 Forms of Distributed Retrospectives
8.1.1 Multiple Distributed Teams
8.1.2 Teams with Singly Distributed Employees
8.1.3 Scattered Teams
8.2 The Right Tools
8.2.1 Web Whiteboard
8.2.2 Stormz Hangout
8.2.3 Lino

Reader Services

Register your copy of Improving Agile Retrospectives on the InformIT site for convenient access to updates and corrections as they become available. To start the registration process, go to informit.com/register and log in or create an account. Enter the product ISBN 9780134678344 and click Submit. Look on the Registered Products tab for an Access Bonus Content link next to this product, and follow that link to access any available bonus materials.
If you would like to be notified of exclusive offers on new editions and updates, please check the box to receive email from us. Please visit www.improvingagileretrospectives.com to download accompanying information to the book.

I recently read the following story in a daily newspaper. In a hotel in Amman, Jordan, a businessman is waiting in front of an elevator. It is one of those big, lavish hotels which, in order to better meet the demands of its guests, has placed six elevators next to one another. The businessman waits and waits, but the elevator doesn't come. The problem is that he is standing so close to the elevator that he fails to notice that some of the other elevators have been at his floor for a long time. Were he to take two paces backward, he would reach his goal more quickly.

This story illustrates how we humans tend to cling to an established decision or a previous experience (the elevator we have called will come, and not another one). We then blindly follow the old, familiar path ("we've always done it this way" or "that's how it's always been") instead of subjecting it to a critical assessment. The fundamental idea of retrospectives is to pause, consider the chosen path and, in order to make better progress in the future, to correct that path by means of a (usually small) change. Actually, this approach is rooted in our DNA: the correct Latin term for the human race is not, as is commonly believed, Homo Sapiens, but Homo Sapiens Sapiens, that is, the human who thinks about thinking (or also, the human who thinks twice). It is exactly this reflection on our normal, everyday experiences that stands at the center of retrospectives.

It is often the case in projects or companies that individual team members are well aware of how things might be improved. However, it is also often the case that there is insufficient time to examine the possible changes in detail. So nothing is changed, and the result is usually that the team has even less time.
This situation is a vicious cycle and is aptly expressed with the old complaint: "We don't have time to sharpen the saws, we have to saw." Retrospectives should thus also be considered part of risk management; the constant analysis of events and the ensuing course corrections mean that risks can be recognized and managed more quickly.

Despite the fact that retrospectives have principally been used in agile software development in order to ensure agility, the regular implementation of retrospectives can be valuable in other areas. The reason for this is partly that, as another old saying goes, you learn through mistakes. However, many companies consider making mistakes a mistake and demand instead that you "do it right the first time." But in our increasingly complex world, finding out what needs to change is not just the larger part of software development. In other areas, too, the first step is to explore which is the best path to the goal. In order to do that, you must also go down some "wrong" paths; otherwise, you cannot know which are the right ones. The right decisions can thus only be reached through the development of the system, and so you may well ask: why continue with this approach? Simply put, exploration is an inherent component of software development and, in today's world, many other areas. The fostering or acceptance of a mistake-culture also demands deliberate and constant learning. Thus, through their focus on continuous development, retrospectives also contribute to the establishment of a learning organization.

Retrospectives need not only be used to find potential improvements. They also afford the opportunity to raise awareness of what already works well and what has thus far been achieved. Team members can sometimes get to feeling that everything is going wrong, and the result is widespread frustration. Holding a retrospective to examine the work that is being done can help them to see that some things are actually working very well.
This can increase the team's motivation.

In this book, Marc has succeeded in giving a truly comprehensive overview of retrospectives: he not only includes proven concrete methods, but also picks up on the latest developments and assesses their usefulness. Marc tackles some far from simple topics, such as distributed, systemic, or solution-focused retrospectives, and makes them practicable. All in all, Marc has created a work that stands alone: a book that offers a solid and practical foundation for those who are new to retrospectives. Furthermore, he has made sure that experienced retrospective facilitators will also find extensive inspiration and guidance in structuring retrospectives more effectively, thus contributing to continuous learning and improvement in organizations. Enjoy using this book to forge a new path or to correct an existing one!

**Jutta Eckstein**
Author of *Retrospectives for Organizational Change*, *Agile Software Development with Distributed Teams*, and *Agile Software Development in the Large: Diving Into the Deep*

When I started using agile frameworks and did my first retrospective, it was love at first sight. The use of retrospectives to establish a continuous improvement process made perfect sense right from the beginning. I liked the idea of having a dedicated workshop with a clear structure that happens at regular intervals: a place and time that can be used to reflect on what happened in the last weeks and months together with your teammates, and a place and time to think about potential improvements based on your discoveries. And I still love to do it.

Unfortunately, many teams still ignore the potential of this practice or start using it when it is already too late. This reminds me of one of my favorite metaphors: the lumberjack. Imagine a lumberjack in the woods cutting down trees. Over the last days and weeks, it has gotten harder and harder to cut down the trees. It already takes hours for one tree.
But he still continues doing his work, as he has promised to deliver a certain amount of wood. To still be able to deliver on time, he skips breaks, works longer in the evenings, and even starts to work on Saturdays and Sundays. But none of these activities solves his real problem: he is getting slower and slower. If he took the time to do a retrospective, he would find out that sharpening his ax would be a good idea, or even better, buying a chainsaw.

The same concept often applies to our work life. Time and again, we become so busy trying to deliver that we forget to ask whether a better way to do our jobs exists. That's what agile retrospectives are for. Instead of getting stuck in the current, potentially suboptimal way of working, these dedicated workshops help to find new ways that might improve your situation. From my point of view, agile retrospectives are the cornerstone of a successful, continuous improvement process. Additionally, they are one of the best tools to trigger a cultural change in organizations. They can even help in traditional change initiatives. Of course, agile retrospectives can be used in private life, too. I use them at the end of the year to do a New Year's Eve retrospective with my family.

As agile retrospectives are not regular meetings, but workshops, you must take some things into account to benefit from this technique. At the same time, you always have to cope with resistance in your organization when you apply the results of your retrospectives. If you were part of one or more change initiatives, you know what I'm talking about. But I guess you bought this book to get some answers, right?

In this book, you'll find all the ingredients you need to facilitate successful agile retrospectives and establish a continuous improvement process. A great agile retrospective is fun, energetic, and diversified; it has a clear goal and purpose, and it takes the system you are currently in into account.
I'll walk you through the steps you must take to get there. I hope you'll find this book useful and that you enjoy reading it.

Acknowledgments

I would like to thank all the people who have directly or indirectly helped to create this book; first and foremost, all the teams for which I had the pleasure of facilitating a retrospective in the past years. I also want to thank all the experts who reviewed earlier versions of this book and helped to turn a good book into a great book. These are (in no particular order) Srinath Ramakrishnan, Susanne Albinger, Pierre Baum, Jon Eversett, Gemma Kuijpers, Mateusz Gajdzik, Dennis Wager, and Adi Bolboaca.

A big thank you goes out to Veronika Kotrba and Ralph Miarka, who wrote the chapter about solution-focused retrospectives. It adds an additional and valuable perspective on agile retrospectives. Another big thank you goes to Eamonn O'Leary, who translated most of the German text. It saved me a lot of time, which I used to add some additional information to this book. I'd also like to thank Lisa Crispin, who connected me with Christopher Guzikowski, my editor at Pearson. Without Lisa, you wouldn't be able to hold this book in your hands.

Special thanks to my wife Andrea, who had my back while I was writing this book. Without her, this wouldn't have been possible. And of course, a big thank you to my two boys, Nico and Ben, who had less time with their dad during the last months.

About the Author

Marc Loeffler is a keynote speaker, author, and agile coach. Before encountering agile methods and principles in 2006, he was working as a traditional project manager for companies like Volkswagen AG and Siemens AG. His passion is to help teams implement agile frameworks like Scrum and XP and to transform our world of work. Marc has a passion for helping teams that are struggling with agile transitions and overcoming dysfunctional behavior.
He loves to generate new insights by approaching common problems from the other side and trying to wreak havoc on the process deliberately.

The primary purpose of this first chapter is to introduce you to retrospectives. I'll tell you how to use retrospectives in a family context, introduce you to the phase model, and give you some hints for how to fill these phases with life. After this chapter, you will have all the basics to start with your first retrospective, so let's get started.

### 1.1 What Is a Retrospective?

Generally speaking, a retrospective (lat. *retrospectare*, "look back") is a review, a look backward. When you lie in bed at night and let the events of the day cross your mind, that is a retrospective. When a family sits down to dinner and talks about the day—the children talking about school and the parents talking about their experiences—that is a retrospective.

Looking back over the life's work of an artist, author, or director is also a retrospective. As part of a retrospective like this, various events take place at which a range of the artist's work is shown. All the important pieces are collected in a single place to provide a complete picture of the artist's work. This makes it possible to get a good overall impression and affords the opportunity of comparing and contrasting the different works of art. This would be impossible if we had access to only one example. Only by getting the overall impression is it possible to see the whole and have the opportunity to speculate about why the artist did one thing and not another.

Another kind of retrospective takes place on television, usually at the end of every year, in the form of a year-in-review program, where the different broadcasters compete to have the funniest, most beautiful, or most famous people on their programs. Entertainment is the priority here, and there's not much emphasis on getting a full picture.
These year-in-review programs are therefore rather patchy and aren't really suitable for drawing conclusions or looking at the connections between different events.

When I speak of retrospectives in this book, I mean something else. The retrospectives I discuss also involve looking back, but that is just the first step. Much more important is to gain knowledge and insight from this activity. This knowledge and insight can help us learn from the past and adapt accordingly. We can learn from successes as well as failures; good things can often be made even better. You could compare it to evolution: things that haven't worked become extinct, but everything that contributes to the preservation of the species is kept and developed further. In the end, each of these adaptations is nothing more than an experiment, because you never know for sure what the result will be. At best, these experiments lead to an improvement of the current situation. Sometimes they just make things worse, which we then must analyze in the next retrospective.

Every retrospective is led by a facilitator, who ensures that the group achieves the goals it sets. He helps the group to develop practical results that will be the foundation for future success. The facilitator is not a participant (although in small teams this is not always avoidable); he accompanies the process but is not actively involved in implementing solutions. A good facilitator is essential for a successful retrospective.

This kind of retrospective was first described by Norman Kerth in his book, *Project Retrospectives: A Handbook for Team Reviews* [1]:

*A retrospective is a ritual gathering of a community at the end of a project to review the events and learn from the experience. No one knows the whole story of a project. Each person has a piece of the story. The retrospective ritual is the collective telling of the story and mining the experience for wisdom.*
In his book, Kerth explains how retrospectives differ from so-called "postmortems" and "lessons learned." The main difference is that retrospectives focus on positive future actions and use them as a catalyst for change. They represent not the end of the project, but milestones in the process of continuous improvement.

In 2001, several people met in a ski lodge to write a manifesto for agile software development [2]. The foundation of the manifesto consists of four pairs of values and twelve principles. The last of these principles is an excellent description of what happens in a retrospective:

*At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.*

This manifesto is one of the main reasons that the agile community in particular enthusiastically incorporated retrospectives into its work process. These people realized that they did not have to wait until the end of a project to learn from what had happened and make appropriate changes. Instead, they organize a retrospective after each iteration; that is, after a certain period. This interval should be no longer than one month. Otherwise, you run the risk of stretching the feedback cycle too far.

**What Is an Iteration?**

The word *iteration* comes from the Latin *iterare*, which means "repeat." Iterations are applied in a wide range of areas where problems are solved step by step. In computer science, *iteration* is the name for the process of taking different steps until the desired condition is reached (as with a FOR loop, for example). In Scrum, an iteration is called a "sprint." I use the term *iteration* to describe the process of running a project in clearly defined, short, repetitive steps. After each iteration, you stop to determine whether and to what extent the project objective has been realized and, if necessary, adapt the original plan. The goal is to keep the risk of uncertainty and surprises to a minimum.
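The computer-science sense of iteration mentioned in the sidebar can be made concrete with a short sketch. The function below is my own illustration, not from the book: one pass through the loop is one iteration, the result is inspected after each pass, and the plan (the guess) is adapted until the desired condition is reached.

```python
def approximate_sqrt(target, tolerance=1e-6, max_iterations=50):
    """Refine a guess step by step until it is close enough to sqrt(target)."""
    guess = max(target / 2.0, 1.0)  # a rough starting plan
    for _ in range(max_iterations):  # each pass through the loop is one iteration
        if abs(guess * guess - target) < tolerance:
            break  # desired condition reached: stop iterating
        # inspect the current result and adapt the guess (Newton's method)
        guess = (guess + target / guess) / 2.0
    return guess

print(round(approximate_sqrt(2), 4))  # → 1.4142
```

The same inspect-and-adapt loop is what an iterative project does: short steps, a check after each one, and a correction before the next.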
The same procedure can also be used in change management. Holding retrospectives enables you to establish a process of continuous improvement, which constantly checks whether or not you are on the right path and also gives you the opportunity to intervene and make any necessary changes promptly. By scheduling a dedicated time for reflection, you give yourself the opportunity to solve problems immediately, instead of having to wait until the end of the project. If you do not hold the retrospective until the end of a project, you run the risk of forgetting what you have learned before the next project. You also gain the opportunity to implement improvements in every iteration.

**What Exactly Is the Term "Agile" in This Context?**

The word *agile* comes from the Latin *agilis*, "to do, make, or act." As described earlier, this agility is based on the 12 principles of the Agile Manifesto [2]. The Agile Manifesto is as follows:

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan

That is, although value exists in the items on the right, we value the items on the left more.

The corresponding 12 principles look like this:

1. Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
2. Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals.
Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity—the art of maximizing the amount of work not done—is essential.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

### 1.2 New Year's Eve Retrospective

A few years ago, my family and I started a New Year's tradition. We call it the New Year's Eve Retrospective. Not only is it a lot of fun, but it also helps pass the time until midnight (especially helpful with children). The New Year's Eve Retrospective goes like this: To start, we all sit down together and look at some photos and watch some short videos that we took during the year. I've prepared a USB stick with the photos and videos beforehand. This phase of our retrospective is always loads of fun and results in a lot of laughter.

After this review, we have a look at our measures and hypotheses from the year. This is important because it is the only way we can determine whether or not the resolutions we made last year had the desired effect. If they didn't, we can decide whether the subject is still relevant and choose a new measure.

After reviewing our hypotheses, we start to recollect all the things about the last year that have been particularly memorable. We use three categories:

- What did I like this year?
- What did I not like at all this year (or what made me angry)?
- Thank you

The first category is for all of those things that were fun or made us happy; for example, our family holiday in a Kyrgyz yurt. The second category includes all the negative events. These are things like "socks everywhere" or "annoying parents." The third category simply serves to say "thank you" to your wife or mom, to the children or siblings, and so on. Connecting your gratitude to a specific case is always important. For example, "Thanks for letting me play with your Skylander toys" or "Thank you for making me a snack every morning."

It is then time to gain knowledge and insights. Each family member is allowed to choose a topic that he or she finds particularly important, and these topics are discussed in turn. The goal of these discussions is to find the underlying causes for the topic. At the moment, we're finding the 5-Why question method very valuable here.

### 5-Why Method

This method starts with the question: "Why is X happening?" or "Why does X always happen?" The answer serves as the basis for the next "why" question. You then repeat the process, digging deeper and deeper until, hopefully, you've found the real cause. We make sure to write this cause on a piece of paper because it is the foundation of the next phase. The 5-Why method is around 100 years old and was created by Sakichi Toyoda [5], the founder of Toyota, to get to the bottom of production problems and so prevent them from recurring.

The next step is to use the causes we've found to create concrete, measurable resolutions for next year. To this end, we have a short brainstorming session to collect ideas about our topics. You wouldn't believe the ideas children can come up with, even for the topics closer to their parents' hearts. Everyone presents his or her ideas for each topic, and we choose the most promising idea. We make our choice by sticking colored dots up next to the ideas on the paper.
This technique is called "Dot Voting." Each of us has three sticky dots, which we can put wherever we like. Once we've finished, we place the newly chosen measures in a prominent place: our family corkboard in the hall, which is our highly visible to-do list. There is nothing worse than results that are not visible after the retrospective. Our board helps us to keep an eye on our new measures and ensure that we actually implement them. Importantly, we also link each measure to a testable hypothesis that we can review in the next retrospective.

Of course, a retrospective also needs a worthy ending. In this case, the choice is easy: the New Year's Eve fireworks.

### 1.3 The Retrospective Phase Model

If you were paying close attention in the preceding section, you might have noticed that we went through six phases during the New Year's Eve retrospective, as illustrated in Figure 1-1.

Figure 1-1 The six phases of a retrospective

These phases form the structure of a retrospective and are based on the original phase model in Esther Derby and Diana Larsen's book [5]. The model I describe here is an expanded form of Derby and Larsen's, the big differences being that I introduced the "Check Hypothesis" step and extended the "Define Experiments" step to include hypotheses. I explain the reasons for this later in the book. In the following sections, I explain the six phases in more detail.

### 1.3.1 Phase 1: Set the Stage

The first phase of a retrospective should set the stage. This phase is very important because every participant has to be mentally "picked up" from somewhere else. If you leave out this phase, you run the risk of one or more participants being mentally absent from the retrospective, as they are still thinking about the last piece of work they were doing before walking in. Preparing the ground serves to get all the participants' attention and get them involved. Starting with a few words of welcome and thanking everyone for taking part is best.
Then you, as the facilitator, briefly explain the reason for and the aim of the retrospective, as well as the timeframe and the agenda. The agenda is important because, after all, we all want to know what we're spending our time on.

**Practical Tip**

Make sure that everyone in the room says something, however brief. Someone who is silent at this stage is likely to remain so for the rest of the retrospective. However, it is very important that every voice be heard, because only then will you be able to get a complete picture. The participants don't all have to tell long stories; a few words per person are enough. For example, you might have people say their name or describe their expectations of the retrospective in a single word. Interestingly, this simple technique works so well that in most cases even the quieter and silent team members will participate in the discussions.

The last step of the first phase is also very important. The aim is to create an atmosphere in which difficult topics can be addressed. Only in an atmosphere where even unpleasant things can be discussed is it possible to get to the bottom of things and to address the real causes of problems. Moreover, that is the basis for a successful retrospective. What happens in Vegas stays in Vegas.

You create this atmosphere by establishing the rules for cooperation, or the "working agreement." Some teams have already defined the values they have for their daily work, and in that case, you should use those values and simply remind the team of them. You might need to adjust a few values to the retrospective. The same applies, of course, if the team has already defined rules for collaboration. Many agile teams create a team charter at the beginning of their collaboration.

**What Is a Team Charter?**

A team charter defines all the rules for teamwork, including the rules for communication and conduct as well as the timing and length of regular meetings.
Software development teams also have a list of the development tools they use and possibly links to further information. The team charter is, among other things, a good starting point for new team members. It should be a living document that is developed iteratively. If any team member feels and expresses that the charter should be adjusted, the team discusses that request and, upon agreement, adjusts it.

If there are no rules for cooperation yet, now is the time to define them. But why are these rules so important? Here is a brief example. Let's say your colleague James has the habit of taking his laptop with him into every meeting. He uses the time in these meetings to answer his e-mails or surf the web. If you start the retrospective without clearly pre-established rules, he will probably do the same thing. It will annoy everyone, but no one will have the rules to point to when asking him to close the laptop. However, if the rules have been defined in advance, they can be pointed out at any time. Another advantage of having common rules for cooperation is that all the participants are responsible for observing them. This makes it easier for the facilitator to concentrate on the actual work of the retrospective.

**Practical Tip**

If the team does not yet have a team charter, invite members to a workshop immediately after the retrospective in order to create one.

Unfortunately, this is the phase of the retrospective that is most frequently skipped because people want to save time and get started right away. In my experience, taking a team through this phase has never been a waste of time. If the team has been working together for a long time, it often takes no more than five minutes.
Five minutes:

- that minimize the risk of someone not speaking
- that make sure that everyone feels they are in a safe environment in which to work
- to get everyone present and let them clear their heads for this important meeting

Sometimes it can also be five minutes of fun. For example, you might ask the team: "If the last iteration were a car, what kind of car would it be?" All it takes is one or two words, and you get everybody mentally present.

**Check-In**

This check-in technique is described in Derby and Larsen's book [5, p. 42] and is implemented after you have welcomed the participants and presented the goal of the retrospective. The facilitator asks a short question, which each participant answers in turn as quickly as possible. Here are a few example questions:

- In one or two words, what do you hope for from this retrospective?
- If the last iteration were a country, which country would it be?
- What kind of weather word (sunny, cloudy, rainy, thunderstorm) would you use to describe your present mood?

It is okay for a participant to say "pass." Even that one word is enough for his voice to be heard.

As a reminder, in our New Year's Eve retrospective, we set the stage by looking at the photos and videos from the past year. Believe me—it's a lot of fun!

### 1.3.2 Phase 2: Check Hypothesis

The purpose of the Check Hypothesis phase is to review the hypotheses created at the last retrospective. Ideally, these hypotheses are created from the experiments chosen (see section 1.3.5). However, why is this step so important? Let's say that during the last retrospective you discussed the problem of very poor communication with the product management team. The product manager is hard to reach, and questions are only answered after major delays. At the end of the last retrospective, you decided on a measure to be taken: The product manager would now be available to the team for a specific time slot every day.
This time would be for discussing current questions and getting answers, thus reducing delays to a minimum. The hypothesis that you connected to this experiment might have been as follows: "Current questions will now be answered in less than 24 hours." This would be a real improvement on the recent situation, in which the team sometimes had to wait several weeks for a response.

After the stage has been set, the team checks the hypotheses. It turns out that the hypothesis was wrong: although the response time is getting a little better, it is still far from the 24-hour mark. So, the problem remains. In the further course of this retrospective, the team will, therefore, try to pinpoint the causes of this problem and then either adapt the current experiment or define a new one. During this process, the team might discover, for example, that the product manager was never consulted about the new change and was simply told to implement it. Rather than motivating him to work more closely with the team, this just made him angry. Using hypotheses enables the team to work on a topic until the problem is either solved or reduced to a tolerable level.

**Practical Tip**

If any of your hypotheses are not confirmed as you expected, use the next phases of the retrospective to find out why.

This example shows that hypotheses are an important tool. Some teams merely check whether or not the measures chosen in the previous retrospective were actually implemented. Only a few bother to check whether those measures also had the desired effect. However, only by checking for the desired effect can you actually create improvement. This is certainly not a panacea, but it is effective in most cases. Hypotheses also help to make retrospectives meaningful and help you to stay focused on a topic instead of letting the discussion wander.
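Because a good hypothesis is testable, checking it can be as mechanical as comparing measurements against a threshold. The sketch below is purely illustrative: the 24-hour threshold comes from the example above, but the function name and the sample response times are invented.

```python
def hypothesis_holds(measurements, threshold_hours=24):
    """True only if every measured answer time beat the hypothesis threshold."""
    return all(t < threshold_hours for t in measurements)

# Invented sample data: hours the team waited for each answer
# since the last retrospective.
response_times = [30, 18, 22, 41, 12]

if hypothesis_holds(response_times):
    print("Hypothesis confirmed: the experiment had the desired effect.")
else:
    print("Hypothesis not confirmed: dig for the root causes in the next phases.")
```

The point is not the code itself but the discipline it encodes: agree up front on what "confirmed" means, measure, and only then decide whether to keep, adapt, or drop the experiment.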
### 1.3.3 Phase 3: Gather Data

Now we come to the actual looking back invoked in the word "retrospective." The aim of the Gather Data phase is to collect data on a clearly defined period from the past. This could be the last iteration (or sprint in Scrum), the period of an entire project, or even the last working day. The time between the events considered and the retrospective should be kept as short as possible. Your main goal in this phase is to create a common understanding of the period you are considering. Without this common picture, the participants might not understand one another's perspectives and opinions and will tend to project their feelings onto others. To create a common picture, everyone gets the opportunity to present his or her view of things.

You start by collecting the hard facts. These facts can be anything that took place during this period, from meetings and decisions to personal experiences. Include everything that had and has a meaning for anybody on the team. Numbers (measures) might also feature in this step; for example, the number of completed requirements, or the number of closed, open, and new errors. The more memorable the result, the better. You could simply talk about all of these things, but including a visualization is much better. A visualization simplifies the recording of information and is indispensable, especially in the case of longer retrospectives. One example of a visualization is a timeline laid out on a wall, which allows you to see the temporal relationships between events (see Figure 1-2).

Figure 1-2 Gather data using a timeline

Although the hard facts are important, they are still only part of the story. Just as important are the personal perspectives that people have on the time being considered, because these tell us which events are more important and which are less so. Collecting both facts and personal perspectives helps to focus on the issues that have most affected the team.
At the same time, the emotional quality of these perspectives also reveals the situations in which people felt good. Knowing when people felt good gives you the opportunity to re-create this situation more often. A further reason to discuss emotional issues is that, though they have the potential to become a drain on energy and motivation in daily working life, they are often overlooked. Only by talking to your team can you find out what is going on and put yourself in a position to address concerns, eliminate negatives, and strengthen positives.

**Definition of the Term "Team"**

When I use the term *team* in this book, I'm talking about any form of team in the professional context. This could be a software development team, a team of HR people, or any other kind of team. It could even be the team of your sports club. In other words, a team is a group of people working together to achieve a common goal.

Before moving on to the next step, take the time with the team to get an overall picture of the period you are reviewing. You can do this by having each team member present his or her insights, or by giving the whole team some time to reflect on the information you have collected (using the timeline, for example).

Reminder: In the New Year's retrospective, we collected the data by sorting events into three categories:

- What did I like this year?
- What did I not like at all this year (or what made me angry)?
- Thank you

Each of us then briefly presents the topic we've chosen. By using emotional words in the question, we set ourselves up to get a combination of hard facts and feelings. Experience has shown me that this phase of the retrospective should be varied very often. I will talk about possible variations in the chapters throughout this book. If you can't wait, have a look at section 1.4.

### 1.3.4 Phase 4: Generate Insights

You use the Generate Insights phase to understand the context of the situation as well as possible causes and connections.
You analyze the events collected in the previous phase and then ask, "Why did these things happen?" What you are looking for are insights into the fundamental causes of the events that took place. After the first phase, this is the next most frequently omitted phase. Many teams skip it and immediately try to define future experiments without considering the possible causes of the current situation. This means that they only ever scratch the surface and that their measures will only treat the symptoms instead of dealing with the root causes. It's like taking painkillers when you have actually broken your leg: the pain will vanish for a short period of time, but because the root cause wasn't addressed, it will come back. This is not a good idea, because what might seem a promising path out of your problem often leads you straight back into it. On the other hand, carrying this phase out well provides you with a solid foundation for the next phase: defining experiments and their hypotheses.

Do not try to tackle all of your problems at once. Instead, choose the issues that the group feels are the most important. You won't be able to solve all of your problems in a single retrospective. This phase is designed to help the team step back, see the whole picture, and start looking for the root causes. It doesn't make sense to work on more than three topics during the next iteration, as these topics won't be the only thing you have to work on, right? You need the insights gained in this phase to define reasonable and effective measures.

Remember, during our New Year's Eve retrospective, every family member is allowed to choose the topic that is most important to him or her and that he or she would like to discuss at this stage. We currently use the "5 Whys" to look for causes. When our children get older, we will vary the technique we use.

### 1.3.5 Phase 5: Define Experiments

The first four phases have set up a strong foundation for the Define Experiments phase.
You've created an overall picture and common understanding of the period under consideration and have also gained some insight into the various events that took place. At this point, most of the team will already have some ideas about what to improve or try out next. So the team's next task is to choose one or two actions and to agree on how to implement them. This also ensures that the team will have the time to implement its decisions. After all, the daily workload still must be done. Trying to implement too many changes at once can lead to problems. It also makes it more difficult to tell later which experiments actually had an effect.

I use the word *experiments* deliberately here. Nobody knows what will happen if you try something out. Although we may have an idea of what might happen (our hypothesis), no one can actually be sure. An analogy for this is a lab researcher who creates an experiment to test his hypothesis. Only at the end of the experiment will he be able to tell whether or not it actually worked. The most effective way to work with these experiments is to repeat your retrospectives at regular intervals that are as short as possible. This creates a safe space: An experiment that is going wrong will make less mess if you cut it off quickly rather than let it run rampant.

Just as important as the definition of the experiment itself is the definition of the corresponding hypothesis. You carry out experiments not (just) for fun, but because you think they will create an effect. The hypothesis allows you to determine the extent of an experiment's effect in the next retrospective. So, hypotheses must be testable. A hypothesis such as "This will lead to fewer errors in the software" is vague and hard to assess meaningfully. A better version of this hypothesis is, "The number of known errors in the software will be reduced to ten or fewer." You must always consider how your hypothesis is to be tested.
This is the only way to make hypotheses meaningful and to use them to define new experiments if the first proves unsuccessful.

**Practical Tip**

Explicitly explain to the team that any action defined in this phase is nothing more than an experiment. No one can be certain beforehand of what the actual outcome will be.

Making the results of the retrospective visible to everyone is good practice. Agile teams, such as Scrum teams, always include the defined experiments in the next planning session. The experiments chosen are considered part of the normal workload and are not extra tasks. That's exactly how it should be. It is also important that the team is willing to carry out these tasks. Having a single person take responsibility for each experiment is best. This person does not have to carry out the experiment alone but is responsible for ensuring that action is taken. If nobody is assigned responsibility now, you're likely to find that no one feels responsible for carrying out the experiment later.

We used sticky dots (like those shown in Figure 1-3) to choose the experiments during the New Year's Eve retrospective. We then displayed these experiments on our corkboard to keep their status in mind. There's nothing worse than task lists that get lost in some document, wiki, or e-mail.

### 1.3.6 Phase 6: Closing

To conclude, spend a few minutes on a short review and celebrate the results of your retrospective. This honors the time and energy that the team has put into both the retrospective and the preceding time span or iteration. You should also document your results appropriately. There are many ways to do this, including taking photographs of the whiteboard and keeping the flipchart paper the team used to develop its ideas. As described earlier, display these things very visibly in the team's workroom. Finally, the facilitator summarizes how to proceed. This is to check that everyone understands the plan.
As a very last step, having a brief retrospective on the retrospective itself is always a good idea. After all, you want continuous improvement to extend to your retrospectives, too. One tool for this is a ROTI (Return on Time Invested) graph, as shown in Figure 1-4.

Figure 1-4 ROTI (Return on Time Invested)

**What Is a Return on Time Invested (ROTI) Graph?**

A ROTI graph is often used after a meeting to get some quick feedback from a team. It is a good way to determine whether your retrospectives are working well or whether they need to be improved. To create a ROTI graph, simply draw x and y axes and then a diagonal line numbered from one to five. One means, "This meeting was a total waste of time." Three means, "This meeting was just about worth the time I invested in attending." Five means, "This meeting was absolutely fantastic; the time I invested in attending paid off incredibly well." Each participant adds a cross to the graph to show his or her opinion, and the result is the completed graph. As you can see in Figure 1-4, this team was quite happy with their retrospective.

My family and I were able to celebrate our New Year's Eve retrospective with a beautiful fireworks display. Unfortunately, you can't do this every day, but a delicious slice of cake at the end of a retrospective can also provide a great ending.

**Practical Tip**

The time you'll need to spend on each phase of a retrospective depends, of course, on the activities you select for each phase, as well as on the total amount of time at your disposal. In general, however, the time you'll spend on each phase can be reliably calculated as a percentage of the total time. By way of example, here are the phase timings for a 60-minute retrospective:

1. Set the stage (5 minutes = 1/12 of the time)
2. Check hypotheses (5 minutes = 1/12 of the time)
3. Gather data (10 minutes = 1/6 of the time)
4. Generate insights (20 minutes = 1/3 of the time)

### 1.4 Finding Activities for Each of the Phases

The six phases are just a framework that helps you to structure retrospectives. Like many frameworks, it tells you what to do, but does not specify how. Your task, then, is to bring these phases to life, and you do that by finding a range of activities to carry out in each of the phases. The activities you choose should be appropriate to the goal of each phase, and when you're still new to retrospectives, finding something suitable can be difficult.

**Practical Tip**

As you're starting out, avoid the temptation to find a new activity for each phase every time you do a retrospective. Just try out a few activities at first.

Many experienced retrospective facilitators have written about their ideas and made them available in books and on the Internet. In the following sections, I present some of the sources I have used. Later in the book, you will also learn a few techniques for generating your own activities, but the following sources are an excellent place to start.

**Practical Tip**

When choosing activities, make sure that they dovetail. You need to be able to use the results you get from an activity in one phase in the following phase. You can't choose activities at random. Just as you'll only be able to cook a good meal if your ingredients work well together, you'll only have an effective retrospective if your activities work well together.

### 1.4.1 Agile Retrospectives Book

"Agile Retrospectives: Making Good Teams Great" by Esther Derby and Diana Larsen [5] was the first book to discuss retrospectives in the context of agile software development and is one of the key texts on retrospectives in general. After a brief introduction to the topic and the description of the phase model, the writers swiftly move on to the practical component. Eighty percent of the book consists of descriptions of activities that can be carried out in the different phases.
The description of each activity includes the goal, the time required, the individual steps, the materials required, and possible variations. Derby and Larsen describe a total of 38 activities, which provides enough material for quite a few retrospectives. Combining these activities in different ways means you can keep a sense of variety and novelty in your retrospectives over a long period.

**1.4.2 Retromat**

I came across the Retromat [6] website by chance and have recommended it as often as possible ever since. No other source enables you to find activities for your retrospectives as easily as it does. It was created by Corinna Baldauf [7]. The first time you visit the site, you immediately get a suggested retrospective plan with different activities proposed for each phase. If you don’t like those activities, you can either generate a completely new plan or click through different activities per phase until you find what you want. The activities on the site come from various sources, including Derby and Larsen’s book. Each plan has a reference number that allows you to find it again or share it with other people. As of this writing, Retromat offers 131 activities, and more are always being added. Retromat also allows you to enter your own activities.

**1.4.3 Retrospective Wiki**

Another great source for ideas on designing your retrospective is the Retrospective Wiki [8], which contains a list of possible activities and complete plans. This wiki also features some tips and tricks, descriptions of typical problems and potential solutions, and links to further sources. Many of the activities included will be familiar from the other sources I’ve described, but you will also find some new ideas. This wiki is constantly expanded and maintained.

**1.4.4 Tasty Cupcakes**

Tasty Cupcakes isn’t really dedicated to retrospectives but features a wide range of games and simulations that can be used in all areas of life.
For example, you might find a workshop on product innovation or a simulation that makes it easier to understand a particular topic. This website was created by Michael McCullough and Don McGreal after they presented a variety of games at the Agile2008 conference. They were assisted by Michael Sahota. Several of the ideas on the site can be used in retrospectives. Just click on the words “retrospective” or “retrospectives” in the tag cloud to get a list of possible activities. This site is constantly being expanded and maintained, so having a look from time to time is worthwhile [9].

**1.4.5 Gamestorming**

*Gamestorming* [10] is a wonderful collection of creative games that support innovation and implementing change. Some people might be put off by the word “game,” but the creative techniques presented in the book are more like playful approaches to work than games. This book is a practical reference with a total of 88 different activities, most of which can be used very easily in retrospectives. After all, a retrospective is nothing if not a catalyst for change. The activities are divided into four categories:

- Core Games
- Games for Opening
- Games for Exploring
- Games for Closing

The names of these categories overlap somewhat with the six phases of a retrospective. “Games for Opening,” for example, are likely to work well in the “Set the Stage” phase. The activities listed under “Games for Exploring” are suited to both the “Gather Data” and “Generate Insights” phases. “Games for Closing” can be used in “Define Experiments” and to conclude the retrospective. Here is a possible plan for a retrospective using activities from *Gamestorming*:

- Set the Stage: Draw the Problem (p. 90)
- Gather Data: Pain-Gain Map (p. 190)
- Generate Insights: Understanding Chain (p. 218)
- Define Experiments: Prune the Future (p. 247)
- Closing: Plus/Delta (p. 246)

The book provides key information for each activity, including the goal, a detailed description of the process, and an approximate runtime, which helps with planning. Also included is a piece of information that is important if you want to carry out an activity effectively: the number of participants. In addition to the activities, the book features a good introduction to the idea of gamestorming and provides you with the information you need to start creating your own activities. This book is a must for anyone serious about retrospectives and implementing change.

**1.5 The Prime Directive**

Some facilitators begin their retrospectives by reading out the fundamental principle, the Prime Directive. First articulated by Norman Kerth in his book *Project Retrospectives: A Handbook for Team Reviews* [1], the Prime Directive is designed to set the stage for the retrospective:

> Regardless of what we discover, we understand and truly believe that everyone did the best job they could, given what they knew at the time, their skills and abilities, the resources available, and the situation at hand.

This principle is read aloud at the beginning of a retrospective, precisely in this wording. The idea is to make it clear to everyone that we are all human and make mistakes. The principle also points out that we shouldn’t assume that things have been done badly deliberately.

**Practical Tip** You don’t need to read out the Prime Directive at every retrospective. In later retrospectives, simply reminding people of it is enough.

Many retrospective facilitators swear by the Prime Directive. They feel that retrospectives that don’t start with this fundamental principle are less effective and therefore less useful. Pat Kua writes [Kua 2012] that this is related to the Pygmalion [11] or Rosenthal effect, commonly known as a “self-fulfilling prophecy.” The effect of a teacher’s preconceptions about his students might be an example of the Rosenthal effect.
The idea is that a teacher’s positive preconception about a student (“that student is a high achiever”) will affect the teacher’s behavior in such a way as to create confirmation of his expectations. What happens is that the teacher subtly transmits his preconception to the student through, for example, more one-to-one attention, more time given for response, the frequency and strength of praise or blame, or higher performance requirements. This is an unconscious rather than a deliberate course of action. In essence, the theory is that someone who is treated as having certain characteristics will manifest them. In fact, Rosenthal’s results have repeatedly been called into question and could only be reproduced in 40 percent of cases [11].

I personally believe that the success of a retrospective depends not on the careful reading out of the Prime Directive, but rather on the values that it describes. I have carried out many successful retrospectives during which I did not explicitly mention the Prime Directive. I’m not saying that reading the principle isn’t a good thing; in new teams, or in established teams that are about to experience their first retrospective, this ritual can have a very positive, if not measurable, effect. In my experience, however, you lose that positive effect if you read out the directive at every retrospective. Repetition does to the directive what frequent flying does to pre-flight safety briefings. The first time you fly, you pay close attention. With prolonged exposure, however, you pay less and less attention until, in the end, you hardly notice it’s happening. A positive attitude is essential for a successful retrospective, but I believe there are many ways to achieve that attitude, and the Prime Directive is only one of them (and one that is certainly no guarantee of success). There is also an alternative prime directive that is somewhat longer but may work better for some teams [12].
I personally like the fact that it is written in the first person and is thus more appealing:

> Some days are better than others. Some days I’m in the “flow” state, doing awesome work. Some days I come to the end of a day and realize I’ve wasted a lot of time, made mistakes that I should have foreseen, or wish I could have done something differently. Regardless, those days have happened and our purpose here is to find out: What can we learn from our past actions and thinking that will inform and guide our future actions and thinking so that we can do a little better? How can we change our environment (“the system”) so that it’s easier for us to do awesome work and less likely for us to waste time and make mistakes?

Like the original Prime Directive, this version describes the goal of a retrospective and articulates the underlying principles. Also like the original, this alternative is just a tool and does not guarantee a successful retrospective. My advice is to experiment with both versions and see what kind of impact they have on your retrospectives. When properly used, the Prime Directive can be a valuable tool.

**Summary**

In this book, I describe what retrospectives are and how to use them to establish a process of continuous improvement. Looking back into the past is only a part of a retrospective, and not even the most important part. Retrospectives should be used to help you gain insights and try new things, to create and carry out experiments, and to question them, too. That is the best way to support a goal-oriented and meaningful process of continuous improvement and constant learning. Although retrospectives are still most commonly used in working life, such as at the end of projects or in the form of “heartbeat” retrospectives in agile teams, they can be usefully applied to any area of life, as in our New Year’s Eve retrospective.
A six-phase process that defines the framework for retrospectives will help you to make them as effective as possible:

- Set the Stage
- Check Hypotheses
- Gather Data
- Generate Insights
- Define Experiments
- Closing

Each phase can be brought to life with a range of activities, which, when regularly changed, will bring fresh energy and ideas into the process. You can either design these activities yourself or turn to one of the many books or websites available. Starting retrospectives by reading out the Prime Directive can help to prepare the ground for a successful retrospective, but you should remember that doing so does not guarantee a successful outcome. Ultimately, the success of a retrospective lies with the facilitator and the participating team. In the chapters that follow, I describe the keys to success and the common pitfalls to avoid.

References
**Abstract**—The performance of HDFS is critical to big data software stacks and has been at the forefront of recent efforts from the industry and the open source community. A key problem is the lack of flexibility in how data replication is performed. To address this problem, this paper presents Pfimbi, the first alternative to HDFS that supports both synchronous and flow-controlled asynchronous data replication. Pfimbi has numerous benefits: It accelerates jobs, exploits under-utilized storage I/O bandwidth, and supports hierarchical storage I/O bandwidth allocation policies. We demonstrate that for a job trace derived from a Facebook workload, Pfimbi improves the average job runtime by 18% and by up to 46% in the best case. We also demonstrate that flow control is crucial to fully exploiting the benefits of asynchronous replication; removing Pfimbi’s flow control mechanisms resulted in a 2.7x increase in job runtime. **I. INTRODUCTION** For years, developers have been constantly introducing new big data processing tools (e.g., Pig, Mahout, Hive, SparkR, GraphX, SparkSQL) into big data stacks such as Hadoop [1] and Spark [2] to address an increasing number of use cases. Figure 1 illustrates the Hadoop and the Spark ecosystems today. On this fast-changing landscape, the open-source Hadoop Distributed File System (HDFS) [3], which is modeled after the Google File System [4], has remained the preeminent distributed storage solution. Given the ubiquity of HDFS, a significant improvement to HDFS will have a sizable real-world impact. However, we observe that several recent efforts at improving HDFS are revealing an emerging need to handle data replication in HDFS more flexibly. As a first example, HDFS developers have recently added heterogeneous storage support [5] in HDFS. 
This addition allows HDFS to explicitly place one or more data copies on faster media (RAM Disk or SSD) for faster future data reads, while still leveraging slower and cheaper media (HDD) for maintaining backup copies. However, this feature offers no substantial performance improvement for data writes. The fundamental reason is that whenever data replication is enabled, HDFS writes synchronously through a pipeline of DataNodes to create copies at those DataNodes. Pfimbi instead replicates asynchronously, exploiting under-utilized storage I/O bandwidth. To achieve this, Pfimbi has to overcome a challenge, namely that in real workloads storage I/O under-utilization is plentiful but individual intervals of under-utilization are often brief and interleaved with periods of peak utilization. Moreover, these periods are not necessarily correlated between different DataNodes. Therefore, to ensure good performance, data blocks must be delivered to a DataNode in a timely manner for the DataNode to take advantage of moments of storage I/O inactivity. However, these transmissions cannot be overly aggressive or they might overwhelm the DataNode and take away its ability to fully control usage of storage I/O bandwidth. These requirements imply that DataNodes must coordinate their actions. To provide this coordination, Pfimbi employs a protocol that is distributed and scalable, yet can achieve very high I/O utilization while avoiding interference. Pfimbi supports hierarchical flow control, which enables highly flexible resource management policies. Asynchronous replication fundamentally changes how flexibly data blocks can be handled. While in synchronous replication every data block is part of a rigid and continuous data stream, in asynchronous replication each data block is discrete and stands alone.
Therefore, a flow control mechanism can freely consider various attributes of a data block (e.g., job ID, task ID, replica number, block size, source DataNode ID, etc.) in making resource allocation decisions. However, the challenge lies in being able to flexibly express and enforce policies that involve multiple attributes. Pfimbi addresses this challenge by supporting a hierarchical model for flow control, where different attributes are used at different levels in the hierarchy to control resource allocation. Many natural policies can be expressed in this hierarchical manner. Pfimbi cleanly separates mechanisms and policies. The length of the synchronous portion of a pipeline can be set by users on a job-by-job basis. Therefore, users can choose whether replication is fully asynchronous, synchronous as it is in HDFS, or a hybrid of the two. The weights assigned to different replication flows can also be set individually, allowing users to dictate how jobs share the available bandwidth. This separation of mechanisms and policies makes Pfimbi flexible and extensible. Our experimental evaluation in Section VI shows that for a job trace derived from a Facebook workload, Pfimbi improves the job runtime by 18% on average, up to 46% for small jobs (writing under 1GB), and up to 28% for large jobs (writing 80GB). Pfimbi improves the runtime of DFSIO, a Hadoop micro-benchmark, by 52% compared to HDFS on a cluster with all HDDs, and finishes all replication work in a time similar to HDFS. When the first position of the replication pipelines is switched to SSD, the runtime improvement goes up to 73%. Finally, for a job running alongside the asynchronous replication of an earlier job, we observe a 2.7x increase in job duration if flow control is turned off, highlighting the importance and effectiveness of Pfimbi's flow control mechanisms. The rest of this paper is organized as follows. In Section II, we review basic concepts in HDFS.
In Section III, we motivate the feasibility and potential benefits of asynchronous replication. The details of Pfimbi's design are presented in Section IV, and we discuss key properties of Pfimbi in Section V. Performance of Pfimbi is evaluated in Section VI. Finally, we discuss related work in Section VII and conclude this paper. **II. BACKGROUND** **A. Terminology** For a job to complete, its output needs to be written once. We call this the primary copy (primary write). Data replication creates additional replicas. A client is application code that reads and writes to the file system. By synchronous replication we mean replication that is on the critical path of client writes: the client write will not complete until replication is done. We use the term asynchronous replication to refer to data replication that is decoupled from client writes. We use the term "normal traffic" to refer to all reads and writes excluding asynchronous replication. **B. HDFS architecture** HDFS is used to distribute data over a cluster composed of commodity computer nodes. It uses a master-slave architecture. The master (called NameNode) handles metadata, answers client queries regarding data locations, and directs clients to write data to suitable locations. The slave nodes (called DataNodes) handle the actual client read and write requests. Data is read and written at the granularity of blocks, which are typically tens to hundreds of MB in size (64MB and 256MB are popular). Clusters often co-locate storage with computation such that the same set of nodes that run compute tasks also run HDFS. **C. Synchronous pipelined writes in HDFS** Block writes are performed in a synchronous pipelined fashion. Figure 2 presents in more detail the anatomy of a synchronous pipelined block write. This process is repeated for every new block. For every block write, the client contacts the master and receives a list of nodes that will host the block copies. The size of the list depends on the replication factor of the file (i.e., the number of copies). Commonly, the first node in the list is the same node that executes the task writing the data. The client then organizes the provided nodes into a pipeline, ordering them to minimize the total network distance from the client to the last node in the pipeline [4]. Once the pipeline is set up, the block's data is sent over the pipeline at the granularity of application-level packets (step 3 in Figure 2). When a node receives a packet from upstream, it forwards the packet downstream and writes the data locally. The last node in the pipeline, once it has successfully received the packet, generates an acknowledgment (step 4), which is then propagated through the pipeline, upstream, all the way to the client. A window-based scheme is used to limit the maximum number of un-acknowledged packets. A packet is considered successfully written after the client receives the corresponding acknowledgment. A block is considered successfully written after all its packets have been acknowledged. **III. MOTIVATION** This section motivates the feasibility and potential benefits of asynchronous replication. First, we discuss the drawbacks of synchronous replication. Next, we discuss why asynchronous replication may still provide sufficient data locality for many jobs. Then, we highlight the fact that consistency can also be guaranteed by asynchronous replication. Finally, we show that disk I/O under-utilization is frequently encountered and can be exploited to perform asynchronous replication. **A. Drawbacks of synchronous replication** Synchronous replication couples replication with primary writes, thus putting replication on the critical path of the writes. This design has important negative implications for both performance and resource efficiency. Synchronous replication causes contention between primary writes and replica writes even within a single job. This contention leads to a disproportional increase in job completion times.
Take sorting 1TB of data on 30 nodes in Hadoop for example. We find that adding 1 replica causes a slowdown of 23%, and, disproportionally, adding 2 replicas causes a slowdown of 65%. Synchronous replication can also lead to inefficient cluster-wide resource usage. Since replication prolongs task execution times, the more replicas are being created, the longer tasks hold on to their allocated CPU and memory resources. Overall cluster-wide job throughput can be increased if these resources are released promptly and allocated to other jobs sooner. Slow DataNodes (e.g., caused by a temporary overload) greatly compound the problems described. Since one node can serve multiple replication pipelines simultaneously, a single slow node can delay several tasks at the same time. Finally, synchronous replication increases a task's exposure to network congestion. Any slow network transfer can also slow down the task. **B. Many jobs can have good data locality without synchronous replication** A task is said to have data locality if it executes on the same node from which it reads its input data. Data locality may improve a task's runtime when reading input data locally is faster than reading it remotely. Synchronous replication can help improve the data locality of a subsequent job by ensuring that many replicas of that job's input data are available when the job starts. This increases the probability that a node selected to run a task of the job also hosts an input block of that job. Data locality can also be obtained in the absence of synchronous replication. Many jobs that process large quantities of data naturally have their input blocks spread over a large portion of a cluster, so their tasks will be data-local with high probability even without any replication. For these jobs, with respect to data locality, whether the input was replicated synchronously or asynchronously is unimportant. **C. Consistency guarantees can be obtained with asynchronous replication** A distributed file system requires that replicas of the same block are consistent with each other. HDFS allows a restricted set of operations on files: (1) only a single application can modify a file at any given time; (2) written data is immutable and new data can only be added at the end of a file. This restricted set of operations is powerful enough to satisfy the needs of the targeted jobs. Section V-A describes how consistency is guaranteed under both synchronous and asynchronous replication. For the two ways in which a file can be modified, write and append, we show that consistency is maintained for both reads following writes and writes following writes. **D. Exploitable storage I/O under-utilization** We now argue that disk I/O under-utilization is frequently encountered. This presents an opportunity for performing efficient and timely asynchronous replication. We also analyze the pattern of disk I/O under-utilization, as this directly impacts Pfimbi's design. Disk I/O under-utilization is plentiful but irregular. This stems from fundamental job properties. Different jobs have different dominant resources and put different pressure on the storage subsystem. Some jobs are storage I/O bound (e.g., Terasort [7], NutchIndexing [8] and Bayesian Classification [9]) while others are CPU bound (e.g., Kmeans Clustering [10]). Even a single task may use the storage subsystem differently throughout its lifetime. For example, a Hadoop reducer is more storage I/O bound during the write phase than during the shuffle phase if the reducer has been configured with enough memory. To illustrate the pattern of disk I/O under-utilization, we run the SWIM workload injector [11] with the first 100 jobs of a 2009 Facebook trace provided with SWIM on a 30-node cluster. We compute the average disk throughput (reads + writes) by reading OS disk activity counters every 100ms.
Every second, we log a computed sample. Each node has one 2TB HDD used solely by the workload. Figure 3a shows a cluster-wide (all 30 nodes) view of the pattern of disk I/O utilization for a representative time window. Figure 3b illustrates the same pattern at the level of a single node. Figure 3a suggests that a significant part of the disk bandwidth is not utilized. Even when compute slots are actively being used, the disk can be idle. Such under-utilization can be exploited to opportunistically and asynchronously replicate data. Figure 3b shows the irregular pattern of disk activity for a single node. Periods of idleness or reduced activity are frequently interleaved with periods of peak activity. This observation suggests the need for a solution that quickly reacts to periods of under-utilization. **IV. PFIMBI** In this section, we describe the design and implementation of Pfimbi. To implement Pfimbi we augmented the HDFS DataNode with a module called the Replication Manager (RM). We use RM to refer to this per-node component implementing the Pfimbi design. **A. Pfimbi design overview** Two basic decisions have guided our design of Pfimbi. First, we must recognize the diverse needs of different jobs and allow jobs to choose flexibly between synchronous replication, asynchronous replication, or a hybrid of the two. Second, we must design for scalability, and as such DataNodes should locally make all decisions related to flow control (e.g., deciding when to write data and which data to write). We do not allow centralized collection of statistics nor centralized flow control decision making due to scalability concerns. Figure 4 presents a logical view of the Replication Manager (RM), the component that implements Pfimbi's new functionality. Arriving data is first de-multiplexed. The RM writes synchronous data to the file system immediately, but asynchronously received replication data is buffered in memory, in the buffer pictured in Figure 4.
The Flow Regulator decides when to write the buffered data to disk, based on activity information reported by the Activity Monitor. The Communication Manager exchanges messages with other nodes. It receives block notifications and forwards them to the scheduler. When the Communication Manager detects free space in the local buffer, it invokes the scheduler to select one of the pending blocks and requests the upstream node to transfer it. The policy enforced by the scheduler dictates which block the DataNode requests. The main components of Pfimbi's design can be separated into two groups: those that enable inter-node flow control and those for intra-node flow control. The intra-node flow control components deal with writing asynchronous replication data locally. The inter-node flow control components deal with inter-node communication and facilitate the forwarding of replication data between nodes. Section IV-B explains in detail how intra-node flow control works, while Figure 5 and Section IV-C explain inter-node flow control. Fig. 4: Components of the Replication Manager in a Pfimbi node. **B. Intra-node flow control** This section describes how Pfimbi decides when to write buffered replication blocks to stable storage. The challenge is to monitor local I/O in a way that permits the rapid and efficient use of disk under-utilization periods while minimizing the interference that replication causes to normal traffic. We start by discussing a number of intuitive approaches that we had to dismiss after thorough analysis and experimentation. Then, we present the approach taken in Pfimbi. 1) Monitoring local I/O - discarded alternatives: Our first alternative was to measure I/O activity at the DFS level. The advantage is that a DataNode can differentiate synchronous writes from asynchronous writes based on the position of blocks in their pipelines.
Unfortunately, this solution is oblivious to non-DFS writes and can lead to replication writes interfering with reducer and mapper spills, as well as mapper output writes. In Pfimbi, we want to avoid such interference. As a second alternative, we considered two disk-level metrics: aggregate disk throughput (reads + writes) and I/O request latency. Replication writes are allowed to proceed whenever the current metric measurements drop below a predefined threshold. These two metrics are accurate and timely. However, they have weaknesses when used for flow control. First, disk-level metrics cannot differentiate between normal and asynchronous writes. When only asynchronous writes are in the system, they would still be throttled whenever activity is above the predefined threshold, leading to inefficient disk utilization. Second, picking a suitable threshold is difficult. Setting too high a threshold results in interference with normal data, while a low threshold leads to decreased utilization. Lastly, when using disk throughput or I/O latency, Pfimbi can only detect and react to idleness after it has occurred. The system repeatedly cycles between detecting idleness, writing more asynchronous data, pausing when activity is above the threshold, then detecting idleness again. Small delays between these steps add up, resulting in lower disk utilization. To avoid low utilization, we considered a third alternative: the sum of the disk write throughput and the rate of change of the dirty data in the buffer cache. Using this aggregate metric, we can detect when the cache is being flushed without being replenished (i.e., the sum is zero). This case is important because it means the buffer cache is draining and the disk will become idle. We can, therefore, write asynchronous data before idleness occurs. However, when experimenting with this aggregate metric, we discovered that a lot of asynchronous data would build up in memory.
The build-up of asynchronous data in the buffer cache reduces the amount of free space for future normal writes to burst into without blocking due to a full buffer cache. Furthermore, the cache churn rate fluctuates faster, and over a greater range, than disk throughput, making it difficult to set a threshold for the aggregate. 2) Monitoring local I/O - Pfimbi's approach: Pfimbi's approach is cache-centric. The idea is to allow the least amount of asynchronous replication data to be written to the cache that is necessary to ensure the disk is kept fully utilized. As a result, the impact of this least amount of asynchronous data in the cache on any normal writes is limited. Pfimbi tracks the amount of dirty data in the buffer cache using standard OS mechanisms and tries to maintain the level of dirty data above the threshold $T$ at which the OS continuously flushes dirty data to disk (e.g., /proc/sys/vm/dirty_background_bytes in Linux). To make sure that the amount of dirty data never falls below $T$, Pfimbi aims to keep $T + \delta$ dirty bytes in memory. $\delta$ is set to be at least the maximum disk throughput multiplied by the monitoring interval. This guarantees that in between Pfimbi's measurements, even if the buffer cache is drained at the maximum disk throughput, the amount of dirty data will not fall below $T$ before Pfimbi can write more data. **C. Inter-node flow control** Section III-D showed that periods of disk under-utilization are often interleaved with periods of peak utilization. When disk under-utilization is detected, a node has to write asynchronous data with little delay. Triggering a remote network read after detecting disk under-utilization incurs much too high a latency to be viable. Instead, Pfimbi achieves fast reaction by keeping a number of asynchronous blocks buffered and ready in memory. The RM controls the flow of asynchronous replication blocks to ensure that receiver-side buffers, as shown in Figure 4, are always replenished.
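Pfimbi's cache-centric admission rule from Section IV-B reduces to simple arithmetic. The sketch below is illustrative (function names and parameter values are ours, not Pfimbi's): it computes $\delta$ from the maximum disk throughput and the monitoring interval, and the number of asynchronous bytes to admit on a tick so the dirty-data level returns to the $T + \delta$ target.

```python
def delta_bytes(max_disk_bps, interval_s):
    """delta must cover a full cache drain at maximum disk speed
    between two consecutive monitoring samples."""
    return max_disk_bps * interval_s

def bytes_to_admit(dirty_now, T, delta):
    """Admit just enough asynchronous replication data to restore the
    T + delta dirty-data target; admitting only the shortfall limits
    interference with normal writes sharing the buffer cache."""
    return max(0, (T + delta) - dirty_now)
```

For example, with a 200 MB/s disk sampled every 100 ms, $\delta$ is 20 MB; a node whose dirty-data level already sits at or above $T + \delta$ admits nothing.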
Pfimbi uses a credit-based flow control scheme. The receiver notifies the sender when there is space in the buffer to receive a block. Only when such a notification is received will a sender start a block transfer. However, a receiver does not initially know which blocks are destined for it. The Communication Manager in Figure 4 uses Remote Procedure Calls (RPCs) to let senders notify downstream nodes of blocks destined for them. Figure 5 demonstrates the use of flow control for replication in Pfimbi. This example assumes that two replicas (A and B) are written synchronously and two asynchronously (C and D). The client sends the data to node A, which forwards it synchronously to node B. Node B is the last node in the synchronous pipeline, so it sends acknowledgments which are propagated upstream to the client. The client's write operation completes when all the data it sent is acknowledged by nodes A and B. As they receive data, nodes A and B write the data locally. This data is initially absorbed by the local OS buffer cache and finally ends up in stable storage. After receiving a data block and writing it locally, node B notifies node C that it has a block available for it. Node C has plenty of room in its buffer, so it immediately requests the block and then begins receiving it. Since node C is on the asynchronous part of the pipeline, it will treat the incoming block as an asynchronous block. After receiving the block and writing it locally, it notifies node D that it has a block available for it. Unfortunately, node D has no room in its buffer at this point. It will first have to write some of the buffered blocks locally to disk to make room for other blocks. Only then will it request the block from node C.
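The notify/request exchange just walked through can be sketched as a small state machine on the receiver side. The class and method names below are invented for illustration and are not Pfimbi's actual interfaces.

```python
from collections import deque

class Receiver:
    """Downstream node: pulls a block only when buffer space exists."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()    # blocks awaiting the local flush to disk
        self.pending = deque()   # block ids announced by upstream nodes

    def on_notify(self, block_id):
        # Upstream announced a block destined for us (an RPC in Pfimbi).
        self.pending.append(block_id)
        return self.try_request()

    def try_request(self):
        # Credit rule: request a block only if the buffer can absorb it.
        if self.pending and len(self.buffer) < self.capacity:
            block_id = self.pending.popleft()
            self.buffer.append(block_id)  # models the completed transfer
            return block_id
        return None

    def flush_one(self):
        # The Flow Regulator wrote one buffered block to disk.
        if self.buffer:
            self.buffer.popleft()
```

With capacity 1, this mirrors node D in the Figure 5 walkthrough: a second notification stays pending until a buffered block is flushed and the request can be issued.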
The scheduling algorithm recursively decides from which position class, and then from which job, to initiate the next block reception. The algorithm maintains a system virtual time function \( v^s(\cdot) \) for each internal class in the hierarchy. When a block reference to the \( k \)-th block of queue \( i \) at the bottom level reaches the head of the queue, it is assigned a virtual start time \( s^i_k \) and a virtual finish time \( f^i_k \) at the bottom-level queue as well as in all ancestor classes' virtual queues in the hierarchy. The algorithm then applies the smallest-start-time-first policy recursively down the hierarchy to choose the block to initiate the transfer for. The system virtual time function is given by \( v^s(\cdot) = (s_{\text{min}} + s_{\text{max}})/2 \), where \( s_{\text{min}} \) and \( s_{\text{max}} \) are the minimum and maximum start times among all active head-of-queue blocks under a class. This ensures that the discrepancy between the virtual times of any two active queues is bounded [13]. Furthermore, \( s^i_k = \max(v^s, f^i_{k-1}) \) and \( f^i_k = s^i_k + \frac{l^i_k}{w_i} \), where \( l^i_k \) is the length of the block and \( w_i \) is the weight of queue \( i \). This example illustrates how Pfimbi can enforce weighted bandwidth sharing between jobs, and also between replicas, using only local decisions. This obviates the need for heavy centralized collection of statistics and coordination of nodes. 2) **Pfimbi can guarantee a bandwidth share for asynchronous replication:** For many workloads, there is enough idleness for pending replication work to be performed. However, there may exist some I/O intensive workloads that keep the disk busy for extended intervals. This would result in replication being throttled indefinitely. To address this case, Pfimbi allows a share of the bandwidth to be guaranteed for asynchronous replication by using a weighted round robin approach.
That is, after receiving a predefined number of synchronous blocks, Pfimbi can flush an asynchronous block regardless of the current activity, to ensure asynchronous replication is not starved. **V. DISCUSSION** **A. Consistency** In this section, we argue that Pfimbi maintains the same consistency guarantees as HDFS. We use the same read consistency definition as in the HDFS append design document: a byte read at one DataNode can also be read at the same moment in time at any other DataNode with a replica [14]. We define write consistency as: writes proceed in the same order at all replicas. We separately discuss regular writes and appends. The difference is that a regular write creates a new file while an append adds data to an existing file. For reads, writes, and appends, we discuss how Pfimbi maintains consistency for both synchronous and asynchronous pipelines. Fig. 5: Inter-node flow control in Pfimbi. The mechanisms used to ensure consistency leverage properties of the write and append operations: (1) a file can only have one client adding data to it (writing or appending); (2) written data is immutable; (3) new content can only be added at the end of a file, which implies that only the last block in a file can be under modification at any given time. This means that for writes and appends, we only need to focus on the last block in a file. Definitions: For the current block at node $i$, let $R_i$ be the number of bytes received, and $A_i$ be the number of bytes that have been acknowledged by downstream nodes to node $i$. If no synchronous node follows node $i$, then $A_i = R_i$. A generation stamp is the version number for a block. A block is first assigned a generation stamp when it is created, and before a block is appended to, its generation stamp is updated.
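Using the definitions above, the read-visibility rule that the following subsections establish reduces to a small invariant. A minimal sketch with our own function names ($A_i$ and $R_i$ passed as per-node lists):

```python
def c1_holds(A, R):
    """Condition C1: max_i(A_i) <= min_i(R_i) across pipeline nodes."""
    return max(A) <= min(R)

def readable_prefix(A, R):
    """A reader may consume up to A_i bytes from any node i; C1
    guarantees those bytes are present at every visible replica."""
    assert c1_holds(A, R)
    return max(A)
```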
1) Synchronous pipelines: Read consistency after a write or an append: Data flows down the pipeline while acknowledgements go upstream, meaning $R_1 \geq R_2 \geq \ldots \geq R_{N-1} \geq A_{N-1} \geq \ldots \geq A_1 \geq A_0$. This implies that the following condition always holds: $$C1: \max_{i} (A_i) \leq \min_{i} (R_i).$$ When a read operation starts, the reader checks how many bytes it is allowed to read. The reader is allowed to read up to $A_i$ bytes, where $i \in \{0, 1, \ldots, N-1\}$. Because of $C1$, all $A_i$ bytes will have been received at (and can be served from) all other nodes in the pipeline. This guarantees read consistency, in that a byte read at one DataNode can also be read at the same moment in time at any other DataNode with a replica. Appends are similar to writes in how bytes become visible to readers. Write and append consistency: HDFS only allows one client to be writing or appending to a file at a time. This is enforced centrally at the NameNode. Having a single writer or appender ensures that writes from different clients are properly ordered. 2) Asynchronous pipelines: Though Pfimbi changes when replicas are created, the following two principles ensure Pfimbi maintains both read and write consistency as defined above. P1: Pfimbi does not make incomplete asynchronously forwarded replicas visible to clients. P2: When an append operation overlaps with asynchronous replication, Pfimbi aborts ongoing forwarding of replicas for the current block being created asynchronously, and only restarts it after the append operation is complete. Read consistency after a write: For nodes in the asynchronous portion of a pipeline, $A_i = R_i$. When a block is in the process of being forwarded asynchronously, the number of bytes acknowledged, $A_i$, at the upstream node will be larger than the bytes received, $R_i$, at the downstream node, because Pfimbi only asynchronously forwards a block after it has been completed at the upstream node.
This violates the condition $C1$ we used above. However, we can invoke $P1$ to ensure that the replicas where the condition is violated are not visible. When a block is being forwarded, the replica being created at the destination is in a temporary state. Such a replica is only made visible to the NameNode when forwarding is complete. When forwarding is complete, $R_i$ at the downstream node becomes equal to $A_i$ at the upstream node. This guarantees that all visible replicas contain at least $\max_{i} (A_i)$ bytes, so $C1$ holds for all visible replicas. Read consistency after an append: When an append operation starts, data is appended synchronously to all currently completed replicas of the block. So if a block has been fully replicated (whether synchronously or asynchronously), the append proceeds as it does in HDFS. If the block is only partially replicated, we invoke $P2$. When a node starts servicing an append request, it aborts ongoing block forwarding for the block being appended to. The downstream node will delete partially created files in the local file system. The node servicing the append also disregards requests to asynchronously forward the block. The append operation then synchronously adds data to all currently completed (and thus visible) replicas, guaranteeing that the appended data can be read from all of them and thus maintaining condition $C1$. After the append finishes, a node can restart asynchronous block forwarding. Subsequently forwarded replicas will have post-append data and the updated generation stamp. When the append starts after a DataNode completes forwarding an asynchronous replica, but before that new replica is visible at the NameNode, the new replica will not be included in the append pipeline. If this replica became visible to clients, $C1$ would be violated. The use of generation stamps prevents this from happening.
When the downstream DataNode notifies the NameNode of the replica’s completion, the NameNode will check if the generation stamp of the replica is equal to the one the NameNode holds. In this case, it will not be, so the pre-append replica will not be added to the NameNode’s map, and therefore will not become visible to clients. The NameNode also instructs the DataNode to delete this replica.

Guaranteeing a single writer: HDFS guarantees only one client is writing or appending to a file by issuing leases at the NameNode. In addition to the guarantees provided by leases, we invoke $P2$ to ensure that asynchronous forwarding is never concurrent with a client’s append.

B. Failure handling

Failure handling in Pfimbi requires no new mechanisms on top of those already used in HDFS and the big data frameworks running on top of it. The only difference is when these mechanisms need to be invoked. If the node hosting the client (i.e., the task that writes) fails, then the pipelined write stops. Task failures are handled by the computation framework (Hadoop, Spark). A new copy of the task is typically launched on another node. If the node that fails is not at the first position in the pipeline but is within the synchronous segment of the pipeline, then the pipelined write also stops. In this case the client times out the write and selects another pipeline to retry. If the node that fails is within the asynchronous part of the pipeline, then the asynchronous pipeline will be severed at that node. All synchronous or asynchronous copies upstream of the failed node can complete normally. The node immediately upstream of the failed node also continues normally. If this upstream node sends a block notification to the failed node, then it will receive an error and stop retrying. If the block notification went through before the failure but there is no block request arriving from the failed node, the upstream node is still unaffected.
This is because the upstream node does not maintain any extra state after sending the block notification, since the remaining data transfer is entirely driven by the downstream node. However, in this failure scenario, if no further action is taken, the data will remain under-replicated. To get the desired number of replicas, Pfimbi falls back to the block-loss recovery mechanisms already supported by the master node. The master node periodically checks for under-replicated blocks within the file system and starts their replication in an out-of-band manner until the desired number of replicas is reached. Lastly, all copies of a block could be lost after the job writing the block has finished. Recovery from such failures entails re-starting previously finished jobs. This requires tracing data lineage across jobs. Such mechanisms can be found in Tachyon [15], Spark [16] and in RCMP [17] for Hadoop.

C. Scalability

Pfimbi is just as scalable as HDFS and can leverage all scalability enhancements for HDFS (e.g. distributed master nodes) because the overall file system architecture remains the same. Importantly, the centralized master performs the exact same set and number of operations. Thus, the load on the master remains unchanged. Pfimbi also retains the pipelined approach to replication; Pfimbi only changes the manner in which the pipeline is managed. Finally, the coordination mechanism introduced by Pfimbi to enable flow control is lightweight and local to a pair of upstream-downstream nodes. Thus, the coordination mechanisms in Pfimbi scale with the size of the DFS.

D. Choosing between synchronous and asynchronous replication

Pfimbi leaves it to the application programmer to decide whether to use synchronous replication, asynchronous replication, or a hybrid of the two. Each job chooses whether to use synchronous or asynchronous replication as part of its configuration.
The application programmer can include this setting in the source code, or as a simple command line parameter when launching the job. The number of DataNodes in the synchronous portion of the pipeline is similarly specified as part of the job configuration. Automating this decision is beyond the scope of this work, and is left to future work.

VI. Evaluation

In this section, we report experimental results to quantify the performance of Pfimbi.

Software setup: We have implemented Pfimbi by extending Hadoop 3.0.0. We compare against default Hadoop 3.0.0. We run Hadoop and Pfimbi on top of the YARN resource allocator.

Hardware setup: We use 30 worker nodes and one master node. The worker nodes host HDFS DataNodes and YARN NodeManagers. Hence, computation is collocated with the data storage. The master and worker nodes have the same hardware configuration. Each node has two 8-core AMD Opteron 6212 CPUs and a 10GbE connection. Nodes have 128GB of RAM, a 200GB SSD and a 2TB hard disk drive. There are no other jobs running on the nodes during our experiments.

Configurations: We represent the tested configurations using the format DFS(#copies, #synchronous copies). Pfimbi(3,1) means using Pfimbi to write 3 copies, with only 1, the primary, being created synchronously. Two replicas are created asynchronously. HDFS(3,3) means using HDFS to create 3 copies, all synchronously. For HDFS, the two numbers will always be the same since HDFS uses synchronous replication.

Default Pfimbi parameters: The per-node buffer used for storing asynchronous replication blocks can hold 16 blocks. Replication flows use a flow weight of 1.0, unless otherwise specified, and by default, we prioritize replicas that are at earlier pipeline positions.

Metrics: We use four metrics to measure the performance of Pfimbi:

- Job runtime: the time from when a job starts until it finishes all synchronous copies.
- The time between job start and the completion of all first replicas. This gives resilience against any single node failure.
- The time between job start and the completion of all replication work.
- The number of block write completions each second.

Job runtime measures job performance whilst the other metrics give information about the efficiency of the system. A job reports completion to the user after it finishes writing all synchronous copies. In HDFS, all copies are created synchronously, so the time to write the primary will be identical to the time taken to write all the copies. However, when using Pfimbi these metrics will be different. This is why we consider them separately.

Workload: We use three workloads to test our system: a SWIM workload derived from a Facebook trace [18], Sort jobs, and DFSIO jobs [19]. SWIM [11], [20] is a MapReduce workload generator. Given an input trace, SWIM synthesizes a workload with similar characteristics (input, shuffle and output data size, job arrival pattern). The SWIM workload is derived from a 2009 Facebook trace [18] and contains the first 100 jobs generated by SWIM. Figure 6 illustrates the distribution of job input and output sizes in this workload. Most jobs are small and read and write little data, while the few large jobs write most of the data. Such a heterogeneous workload is popular across multiple data center operators [21], [22]. Including replication, roughly 900GB of data is written into the DFS. In our experiments we gradually scale up the workload by up to $8 \times$ (i.e., 7TB). This allows us to observe the behaviour of Pfimbi under light and heavy load conditions. To analyze Pfimbi’s benefits in a more controlled environment, we use Sort jobs. In all our Sort experiments, we sort 1TB of randomly generated data. Sort experiments are repeated 3 times for each configuration. Lastly, we use DFSIO [19] as a pure write-intensive workload. DFSIO is routinely used to stress-test Hadoop clusters.
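The DFS(#copies, #synchronous copies) shorthand introduced above can be parsed mechanically; the following helper is purely illustrative and not part of Pfimbi:

```python
import re

# Parse configuration strings such as "Pfimbi(3,1)" or "HDFS(3,3)" into the
# number of synchronous and asynchronous replicas. HDFS replicates only
# synchronously, so for it both numbers must match.

def parse_config(s):
    m = re.fullmatch(r"(\w+)\((\d+),\s*(\d+)\)", s)
    if not m:
        raise ValueError(f"bad configuration string: {s!r}")
    system, total, sync = m.group(1), int(m.group(2)), int(m.group(3))
    if system == "HDFS" and sync != total:
        raise ValueError("HDFS creates all copies synchronously")
    return {"system": system, "copies": total,
            "synchronous": sync, "asynchronous": total - sync}
```

For example, parse_config("Pfimbi(3,1)") yields one synchronous (primary) copy and two asynchronous replicas.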
Unlike Sort, DFSIO is storage I/O bound for its entire duration, and its only writes are to the DFS. This enables us to analyze the behavior of DFS writes in isolation from the non-DFS disk writes (task spills, mapper writes) encountered in other Hadoop jobs.

A. Pfimbi improves job runtime

We start by analyzing Pfimbi’s benefits on an I/O intensive workload by running a DFSIO job. We also analyze the ability of HDFS and Pfimbi to leverage heterogeneous storage by varying the location of primary writes (SSD or HDD). The two replicas always go to HDDs. Thus, we analyze the following pipeline configurations: SSD→HDD→HDD and HDD→HDD→HDD. Since in the SSD case the primary writes do not conflict with replica writes, for a fair comparison with the HDD case, we partition the nodes into two groups. Half of the nodes handle the replicas while the other half handle the primary writes and the tasks. Partitioning is only necessary for this experiment. For all our other experiments, we do not partition our nodes. Figure 7 illustrates the results for the two DFSIO jobs. Pfimbi significantly lowers the completion time for primary writes whilst maintaining high disk utilization. Pfimbi improves job runtime by 52% and 73% for the HDD→HDD→HDD and SSD→HDD→HDD configurations, respectively. The time to obtain the first replica is also reduced by 32% for both configurations. These large gains are obtained with only a 5% penalty on the time it takes for all data to be written to the non-volatile storage. The small bar above the 2nd replica shows the time it takes for all dirty data to be flushed to non-volatile storage after an operating system sync call. The bar is much smaller for Pfimbi, since Pfimbi restricts the amount of asynchronous data that can be in the buffer cache. In Figure 7a, HDFS(3,3) cannot benefit from moving the first copy write to SSD because it is using synchronous replication. The pipelines are bottlenecked by the slower HDDs.
With Pfimbi, the primary write duration improves by 44% when we move from the HDD→HDD→HDD to the SSD→HDD→HDD configuration. Pfimbi is better able to exploit storage heterogeneity.

Fig. 7: Completion time of different write stages for a DFSIO job. Primary writes go to either HDD or SSD. (a) HDFS(3,3) cannot benefit from SSDs because of synchronous replication. (b) Pfimbi finishes primary writes much faster due to the decoupled design and also benefits from switching to SSD. The small bar above the 2nd replica shows the time it takes for all dirty data to be flushed to disk after a sync call.

We ran the SWIM workload to evaluate Pfimbi’s benefits under different load conditions. The load on the storage subsystem is varied by scaling up the workload data size. Figure 8 shows the results. It plots the average job runtime under Pfimbi(3,1) normalized to HDFS(3,3). Under a heavy workload (8x scaling) Pfimbi shows an 18% improvement in average job runtime. The per-job improvements (not illustrated) are up to 46% for small jobs (writing less than 1GB), and up to 28% for the most data intensive jobs (writing 80GB). We also ran lighter workloads (2x and 4x scaling). Pfimbi does not show significant benefits for these. When the workload is light there is very little contention caused by replication, so there is little room for optimization.

Pfimbi also shows improvements when a Sort job is run in isolation. Figure 9a shows that as we decrease the length of the synchronous portion of the pipeline, the job runtime decreases. The runtime of Pfimbi(3,1), which is purely asynchronous, is 36% less than that of HDFS(3,3), which is purely synchronous. This improvement is because Pfimbi reduces contention between normal traffic and asynchronous traffic. We next analyze how efficiently Pfimbi performs replication. We use the single Sort job to decouple the results from the influence of concurrent jobs. Figure 9b is a CDF showing the proportion of replicas created over time. We measure time from when the first block of the Sort job is written. Pfimbi completes replication just as quickly as HDFS, all whilst substantially reducing the duration of the write phase of the job.

Fig. 9: Sort job under HDFS and Pfimbi. (a) Pfimbi improves Sort runtime. (b) The primary writes (black) and first replicas (blue) complete faster in Pfimbi. The second replicas (green) complete on par with HDFS(3,3).

B. Pfimbi isolates primary writes from asynchronous replication

Asynchronous replication without flow control can be achieved using the setRep mechanism in HDFS. After a file has been written, the setRep command is invoked to increase the replication factor of the file’s blocks. In Figure 10, we can see that such asynchronous replication without flow control causes the second of two back to back DFSIO jobs to run 2.7x slower than under Pfimbi. Flow control in Pfimbi minimizes the interference that the asynchronous replication data for the first job has on the runtime of the second. The duration of the first job should be the same under both Pfimbi and HDFS. The slight difference in Figure 10 is within the expected natural variation between multiple runs of the same job.

Fig. 10: Pfimbi(3,1) vs. HDFS(1,1)+setRep(3) for two back to back DFSIO jobs. setRep is called immediately after a job is completed. Asynchronous replication by itself (in setRep) is not enough to obtain large performance benefits. Without flow control setRep’s asynchronous replication severely interferes with the second job.

C. Pfimbi can divide bandwidth flexibly

Flexibly dividing disk bandwidth between jobs: We ran three concurrent DFSIO jobs and varied their flow weights. The third job is launched 500 seconds after the first two. Figures 11a and 11b show the rate of block completions for the three jobs. Time point zero on the plot is when the first block is written. Each job writes 600GB of data. For this experiment, we fairly share bandwidth between replicas at different positions in pipelines. This enables us to clearly see the effects of dividing bandwidth between jobs. In Figure 11a the flow weights are equal, so the three jobs’ replication flows share bandwidth equally.

Fig. 11: Three DFSIO jobs are run concurrently with different replication flow weights. Each job has a replication factor of 3. We measure time from when the jobs start writing blocks. Job 3 is started 500 seconds after the first two. When flow weights are different (in (b)) we observe bandwidth sharing at proportions according to the ratio of the flow weights, thus Job 3 is able to achieve failure resilience much sooner.

Figure 12 shows the number of block completions versus time for replicas at different positions in pipelines. Similar to Figure 11, time point zero on the plot is when the first block is written. In Figure 12a we do not give priority to earlier positioned replicas, and we observe the 2nd replicas and 3rd replicas being created at the same rate as the 1st replicas. A user may prefer all the 1st replicas to be given priority. In Figure 12b, we set Pfimbi to give priority to replicas at earlier positions in the pipelines when selecting the block to be received next. We set a ratio of 100:10:1. This reduces the overlap between the writes of blocks at different positions in the pipeline, and replicas at earlier positions are finished sooner.

VII. RELATED WORK

In Section I, two related works [5][6] by HDFS developers have already been mentioned. The rest of this section discusses additional related work in the literature. Sinbad [23] addresses network contention as a potential performance bottleneck in DFSes. Sinbad leverages the flexibility of big-data jobs in choosing the locations for their replicas. Sinbad chooses replica locations to reduce the risk of network hot-spots. Sinbad and Pfimbi are highly complementary.
The block placement strategy in Sinbad and Pfimbi’s flexibility to perform flow-controlled asynchronous replication can potentially be applied simultaneously to achieve the best of both worlds. TidyFS [24] is a simple distributed file system. By default, it performs lazy, asynchronous replication. To motivate TidyFS, the authors find that in a Microsoft cluster, less than 1% of the data is read within the first minute after creation, and not more than 10% within the first hour after creation. These data access patterns also provide additional motivation for Pfimbi. TidyFS fundamentally differs from Pfimbi in that there is no management of asynchronous replication traffic. Once an asynchronous replication thread finds replicas to be created, it starts creating them immediately regardless of the system load, similar to the setRep trick we considered in Section VI-B. Thus, in TidyFS, local client writes will experience contention from asynchronous replication traffic, leading to poor performance as Section VI-B shows. In contrast, Pfimbi uses flow control techniques to allow replication to be performed during periods of disk under-utilization, thus minimizing interference. Pfimbi’s flow control is distributed and its flow control decisions are made locally at each DataNode. Intelligence is also distributed since each DataNode runs its own hierarchical block scheduler to implement different resource sharing policies. This design enables Pfimbi to make fine-grained flow control decisions to exploit momentary IO under-utilization. In contrast, Retro [25] is a centralized resource management framework. In the case of HDFS, Retro would centrally compute and set the data rates of different synchronous replication pipelines to implement a resource sharing policy. The intelligence is centralized in Retro; DataNodes simply perform data rate control as instructed.
It would be impractical to use Retro to implement the fine-grained flow control that is necessary to enable efficient asynchronous replication. Parallel replication streams sharing a single source has been proposed as an alternative to pipelined replication to reduce write latency [26]. However, parallel replication does not address the overhead due to contention between replication and normal data. In reality, the overhead of I/O contention either in the network or on the disk can have a much larger effect on job performance than the overhead of pipeline latency. VIII. CONCLUSION Over the past five years, since its initial release, HDFS has continued to gain adoption. Many new features and optimizations have since been introduced, but until recently, little has changed about how data replication is handled. The nascent effort on adding in-memory storage with lazy-persist to HDFS has highlighted a need for a more flexible approach to data replication that does not sacrifice performance. In this paper, we have explored the idea of asynchronous replication within a design called Pfimbi. Our key contribution is that we have demonstrated that asynchronous replication when combined carefully with flow control mechanisms can provide very substantial improvement in job runtime over a range of workloads and heterogeneous storage configurations. Moreover, the flow-control mechanisms in Pfimbi can be used to realize a rich set of IO resource sharing policies. Finally, Pfimbi is readily deployable in existing big data software ecosystems. ACKNOWLEDGEMENT We would like to thank the anonymous reviewers for their thoughtful feedback. This research is sponsored by the NSF under CNS-1422925, CNS-1305379 and CNS-1162270, an IBM Faculty Award, and by Microsoft Corp. Simbarashe Dzianamirira is also supported by a 2015/16 Schlumberger Graduate Fellowship. REFERENCES
The Progress ODBC FAQ (Final Version 2.1 – 12/22/98)

Geoff Crawford
Innov8 Computer Solutions, LLC.
Phone: (973) 361-4224 FAX: (973) 537-6946 E-Mail: info@innov8cs.com

# Table Of Contents

- What is ODBC?
- ODBC Basics
- What Are The Different ODBC Versions
- ODBC Drivers Available For Progress
  - Version 6
  - Version 7
  - Version 8
  - Version 9
- Architectures
  - Single Tier
  - Multiple Tier
- Setting Up The Drivers
  - OpenLink
    - OpenLink Lite 16 Bit
    - OpenLink 32 Bit (Small Client and Large Client)
    - OpenLink ODBC (MT)
  - Intersolv
    - Intersolv DataDirect ODBC
    - Intersolv Version 3.01/3.10
  - SCO SQL Retriever
  - Progress
  - OID/OIB Setup
  - ODBC Application Compatibility
  - Progress Version Compatibility
- Database Triggers and ODBC
- Special Notes On ODBC 3.0
- Some Miscellaneous Comments
- The ODBC DataServer
- JDBC and ODBC to JDBC Bridges
- Closing Note
- Resource List
  - Books
  - Web Sites

What is ODBC?

ODBC stands for Open Database Connectivity and is a Microsoft specification. ODBC provides a standard way of defining data sources and their data access methods. ODBC is designed around SQL and relational databases, but there is nothing preventing translation of the incoming SQL to another language. The ODBC specification defines low level API calls that any application can make use of for database queries. By writing calls to the API, a report writer or other tool can portably access heterogeneous data sources with one set of source code.

ODBC Basics

From a user’s standpoint, all you need to know is that your report writer, or other tool, has been written to make use of the ODBC specification. From there, you need to select and install a Progress ODBC driver. Possible drivers are listed below, along with some notes about architecture and installation. After your driver is installed, you must define each of your Progress databases as a valid ODBC data source. This is done with the standard Microsoft ODBC administrator program.

What Are The Different ODBC Versions

The ODBC specification is currently up to version 3.0. With each different version, new functionality is added. Some of the new specification deals with threading and other behind-the-scenes makeup of the driver; other parts, like the new tracing capability, are more visible.
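As a modern, hypothetical illustration of the idea (this example is not from the FAQ, and the pyodbc package post-dates it), an application connects through the driver manager by naming a configured data source. The helper below merely assembles the standard semicolon-separated ODBC connection string; the DSN, user, and password values are invented:

```python
# Build a DSN-based ODBC connection string ("KEY=value" pairs separated by
# semicolons), the format the ODBC driver manager accepts.

def odbc_connection_string(dsn, uid=None, pwd=None):
    parts = [f"DSN={dsn}"]
    if uid:
        parts.append(f"UID={uid}")
    if pwd:
        parts.append(f"PWD={pwd}")
    return ";".join(parts)

# A Python tool would then hand the string to its ODBC binding, e.g.:
# import pyodbc
# conn = pyodbc.connect(odbc_connection_string("sports", "user", "secret"))
```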
ODBC Drivers Available For Progress

Several companies have made drivers for Progress databases, including Intersolv, OpenLink, and Progress themselves. Drivers are specific to the version of Progress:

**Version 6**
- Esker Tun SQL
- Intersolv DataDirect ODBC Version 2.1
- OpenLink ODBC (MT)
- Progress

**Version 7**
- Esker Tun SQL
- Intersolv DataDirect ODBC Version 2.5 (16 bit and 32 bit) *
- Intersolv DataDirect SQLLink (In Beta as of 7/97)
- OpenLink ODBC (MT)
- OpenLink Lite (Small Client, and Large Client) *
- SCO SQL Retriever

**Version 8**
- Esker Tun SQL
- Intersolv DataDirect ODBC Version 2.5 (16 Bit and 32 Bit) *
- Intersolv DataDirect ODBC Version 3.0 (32 Bit)
- OpenLink ODBC (MT)
- OpenLink Lite (Small Client, and Large Client) *

Note: * - The 16 bit versions seem to be no longer available. The 32 bit versions work with their respective versions of the server and with the 32 Bit ESQL that comes with V8 Client/Networking. Version 7 databases are supported, but Version 7 Client/Networking is not specifically supported. It can be made to work; see notes later in this document.

**Version 9**
- Progress

Architectures

There are two basic architectures employed by the driver makers: single vs. multiple tier. Intersolv’s DataDirect ODBC, OpenLink Lite, and the Progress ODBC Driver are single tier drivers, while the rest are all multiple tier.

Single Tier

Single tier architectures use the driver itself to process the SQL query, implying PC-side resolution. The driver connects to the database, sends SQL to the database, does any additional record selection or joining, and then passes the result to the application. Driver connections for Progress require a client product such as Client Networking, 4GL Client, or ProVision to connect to the database, or use their own network protocol for a remote database. The Progress client is responsible for getting the records to the driver, where it does the rest of the work.
Starting in Version 8, executables separate from the full Client Networking product are shipped for establishing this connection. This smaller client is referred to as the Open Interface Driver and is combined with the Open Interface Broker for multi-user situations.

Multiple Tier

Queries are offloaded by the driver to another application in multiple tier architectures. This secondary layer is generally a networking program that talks to a server-side component. The server receives SQL requests from multiple network connections, resolves the request through interaction with the database, and returns the data to the PC’s secondary layer. The secondary program must still pass the final results to the driver. While it is not required, almost all multiple tier implementations make direct connections from the server to the database. The server-side execution generally provides better performance since only selected records get passed to the client PC. Under Progress Client Networking, records are sometimes passed to the PC for selection, increasing network traffic. The specific circumstances where this happens are version specific, but joins, for example, are resolved by the client PC under all current versions of Progress.

Setting Up The Drivers

OpenLink

OpenLink’s web site contains pages detailing installation instructions for all of their drivers, including screen shots. Instructions on Progress program setup (OID/OIB setup for example) or ODBC Admin installation are not included. See http://www.openlinksw.com/support/install/proltins.htm for all versions of the Lite Drivers, and http://www.openlinksw.com/support/install/unixserv.htm for the MT Driver server installation. Be careful with all of OpenLink’s Lite uninstall programs. Some versions appear to entirely wipe out the datasource list, others seem to remove nothing at all. (Progress datasources, DLL’s, icons, etc.
are all left alone)

OpenLink Lite 16 Bit

Currently, the instructions are incomplete as the driver installation failed. At the current time this driver also appears to be discontinued. OpenLink Lite assumes the standard Windows 16 bit Version 2.0 ODBC administration has already been set up. If not, copy into the Windows “SYSTEM” directory ODBC.DLL, ODBCINST.DLL and ODBCADMIN.EXE from the “Winfiles” subdirectory of the Progress DLC directory. Under Windows NT, this will need to be done even if 32 bit ODBC administration has been installed. Since the Lite Drivers are 16 bit, do not copy the files into \WINNT\SYSTEM32, but \WINNT\SYSTEM. Next run the SETUP.EXE and follow the prompts. A check will be done to ensure the 16 bit ODBC files have been installed. Run the ODBC setup program from the control panel, and choose Add from the list of Data Sources. Select OpenLink Lite 16 Progress from the list of installed drivers. This is the point where the drivers do not appear to be properly installed.

OpenLink 32 Bit (Small Client and Large Client)

There are two versions of the 32 bit driver, the Small and Large Clients. The Small Client uses the OID/OIB setup to talk to Progress, while the Large Client goes directly to the Client Networking executable. If you already have the OID/OIB setup working for some other reason (for example, you are using Actuate), then you may want to use the Small Client since the DLL needed by OpenLink is considerably smaller. Otherwise, the headache of setting up the OID/OIB can be avoided by using the Large Client. Both drivers expect the 32 bit ODBC admin program to already be installed. It can be downloaded from the Microsoft FTP site if needed; the file name is WT1250.EXE. After downloading the file, copy it into a separate directory. Click on the Start Button to run the self-extracting EXE. Run the SETUP.EXE that was extracted. It is not necessary to install the entire package.
Select the Custom Install and deselect Desktop Drivers, SQL Server, and Oracle unless you want these drivers for some other reason. If the installation is successful, the ODBC admin program will be run automatically. Since the OpenLink Lite driver is not yet installed, simply click on the Cancel button.

To install the OpenLink Lite 32 Bit Driver, simply run the self-installing PROLTE32.EXE. The installation will make sure that the 32 Bit ODBC has been installed and then prompt you for the directory where you would like the driver installed. After choosing the directory you want and viewing the README file, go to the Control Panel and run the ODBC Admin 32. Make sure this is the Admin 32, since you may have another 16 bit ODBC icon as well. Choose the Add button to create a new ODBC data source. Highlight "OpenLink 32 Bit Lite" and click OK. A box with four tabs will appear. Give the database a name and type in a description if desired. Click on the Progress tab to fill in the database connection parameters. Which parameters you will need depends on whether you have the Small or Large Client version.

For the Small Client, fill in the session options with the OID/OIB connection parameters. This would typically be:

`-SV -S oibservice -H host -N TCP`

where *oibservice* is the TCP/IP port number for the OIB, and *host* is the name of the remote server. (You must put in the server's name as found in your "hosts" file; the actual IP address will not work.) Only the TCP protocol is supported.

Important Note: *oibservice* is NOT the Progress database port number used in making a client/server connection. You will get an error message from the ODBC driver saying there is a database protocol error if you try to put the Progress database port number here.

For the Large Client, simply leave the session parameters blank. Both versions of the driver require the same Database Options information. This is where to put the database connection parameters.
They would typically be something like:

`-db dbname`

OR

`-db dbname -H host -S dbservice -N protocol`

The first example is for a local database server, and the second example is for a remote database. Be careful to consider that the Small Client uses the OID/OIB, so the connection parameters must be from the viewpoint of where the OID is connecting from. For example, if the OID is running on machine "B" and the database is /usr/db/database, then you would want to use the Database Options of:

`-db /usr/db/database`

If you used parameters like the second option, you would be connecting to the host machine, and then using TCP/IP to loop back to the same machine. This would create a client/server connection and be considerably slower than the self-service option listed above. The same thing goes for the Large Client: since it uses the Progress executable on the local machine, you want to use the same parameters you would give your 4GL client. If the database is local to the PC, then use just the -db parameter, but if the database is remote to the PC, then use -H, -N, and -S. Keep in mind that the database server must be running to make the connection, and the "hosts" and "services" files must contain the -H and -S information if client/server mode is being employed. You must also start the OIB if you are running the Small Client version.

Both 32 bit OpenLink drivers require the 32 bit Embedded SQL DLL from Progress Version 8.2 to be installed in the \WINDOWS\SYSTEM directory (or \WINNT\SYSTEM32 for Windows NT). The OpenLink Web Site erroneously states this file is shipped with their driver. If you have a Progress 32 bit version installed, it can be copied from the DLC "bin" subdirectory. If not, you must obtain this file from Progress or purchase a 32 bit Progress product that contains it.
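The local-versus-remote distinction above can be sketched as a tiny helper script (a sketch only — the `db_options` function and the sample names are made up for this FAQ, not part of any driver; they just compose the parameter strings described above):

```shell
#!/bin/sh
# Sketch: compose the Database Options string for an OpenLink DSN.
# If no host is given, the database is local to where the OID (Small
# Client) or Progress executable (Large Client) runs, so a self-service
# connection with just -db is fastest. Otherwise a client/server
# connection needs -H, -S, and -N, matching "hosts" and "services".
db_options() {
  db="$1"; host="$2"; service="$3"
  if [ -z "$host" ]; then
    # Self-service: avoids looping back over TCP/IP to the same machine
    printf '%s\n' "-db $db"
  else
    # Client/server: database server must already be running
    printf '%s\n' "-db $db -H $host -S $service -N TCP"
  fi
}

db_options /usr/db/database          # database local to the OID
db_options mydb dbhost dbservice     # remote database server
```

The point of the branch is exactly the pitfall described above: passing -H for a database that is really local silently downgrades a self-service connection to a much slower client/server loopback.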
**OpenLink ODBC (MT)**

OpenLink's Multi Tier drivers consist of four pieces of software: the Universal PC Driver, the Request Broker, the Database Agent, and a registration key. Download a Request Broker based on the server platform. This file will be of the form rqb*.taz. Download a Database Agent for Progress 6 or Progress 7; the file name will be pro6*.taz or pro7*.taz. Finally, fill out the registration form and load a temporary registration key.

Copy all of the files to the directory where you want to install, such as /usr/openlink. Run ./install.sh to extract the files from the archives and then run ./bin/oplcfg. This is the OpenLink configuration script that will tell the driver where your version of Progress is installed. Select either option 8 or option 9, depending on version 6 or 7 of Progress. Fill in the full path name of the DLC directory. The only other parameter OpenLink requires is the directory name where the OpenLink driver was installed. This will be automatically set by the install script, but if you move the directory, simply select option 1 in oplcfg. Lastly, start up the Request Broker by selecting option number 13. This completes the server install.

Download a PC driver based on your client PC operating system. There are basically two choices today, either the Windows 3.1 version or the Windows 95/NT version. Install the driver by running SETUP.EXE and follow the instructions. Run the ODBC setup program to define a data source. You must type in the name of the database, and any other startup parameters. Remember that the Multi Tier driver executes on the server, so all parameters and file names must be from the server's viewpoint.

**Intersolv**

**Intersolv DataDirect ODBC**

Directions for installing all Intersolv DataDirect series are the same with the exception of the environment variables needed. (Please see the special note for the Version 3.01 Driver.) Run the SETUP.EXE program and enter your contact information and serial number.
Choose which drivers you want to install. Unlike OpenLink's Lite driver, Intersolv will automatically install the ODBC administration if needed. By default, Intersolv installs all of the drivers in the pack, overwrites older drivers, and creates default Data Sources. Modify these parameters as desired.

***** The version of PROESQL.DLL shipped with Intersolv 2.5 will not work with Progress Version 8.1 and above. You must copy \DLC\BIN\ESQL01.DLL to your Windows directory and rename it as PROESQL.DLL. Since this is a 16 bit DLL, it is not shipped with Version 8.2 but is available on the Progress FTP site. While 8.2 does seem compatible with the Intersolv 2.5 driver, it is unsupported. *****

After all files are copied to the appropriate directories, you must create an ODBC data source. The ODBC icon will now appear in the Windows Control Panel. Run it and select Add from the list of Data Sources. Select Intersolv 2.5 Progress from the list of installed drivers. You must now enter three sets of parameters, one each for the ODBC information, the OID/OIB information, and the Database Connect Options.

The Data Source name is arbitrary but will be seen when you select this database in your ODBC application. Enter a description, and the database name without any directory names. This is important because most ODBC applications will use this parameter as the database alias. Enter an optional Username if security is turned on.

Next enter the OIB/OID parameters, starting with either a Local or Remote location. The Protocol is chosen for you based on the OIB location. Only WIPC is supported for Local connections, and only TCP/IP for Remote. Since Progress Version 8.2 no longer supports WIPC, you must always use TCP/IP with the Intersolv Driver. For standalone Windows V8.2 databases, this is the only server protocol provided. Fill in the service name used to start the OI Broker. (Remember to use the OIB service number, not the Progress database service number.)
For local connections, this parameter is not used, but must be filled in anyway. (Any name will work.) Lastly, fill in the parameters the OID must use to connect to the database. Select either Direct or Via Server for host mode or client/server respectively. Direct connections further require the Database Path and Operating System to be filled in. The path is only the directory name and will be concatenated with the Database Name entered in the ODBC parameter section. Fill in the Protocol, Host and Service Names for client/server connections.

If you are using a remote OID/OIB connection, you may not have set up the necessary environment variables. These are the same variables you would need to set up a Progress client session, namely DLC, PROMSGS, and PROCFG. Since you are not executing Progress 4GL programs, PROPATH is not needed. For Windows 3.11 or Windows 95 computers, set the parameters in your AUTOEXEC.BAT. Windows NT requires the parameters to be set in the DOS Console. After the OIB and database servers are started, the driver is ready to use. See the OID/OIB section for further notes.

**Intersolv Version 3.01/3.10**

***** The Intersolv 3.01 AND 3.10 drivers have been shipped with an alternate variation of PROESQL.DLL. This driver is meant to help you separate the version of Progress you use for the driver from the version you program with. This is accomplished by using a different set of environment variable names. This driver requires you to set IDLC, IPROMSGS, and IPROCFG. Without these variables set, you will most likely receive the message "Unable to find Progress PROMSGS File". *****

**SCO SQL Retriever**

SQL Retriever is a Multi Tier type driver, similar to OpenLink's MT, but for Version 7 Progress only. There are two versions, the full version and the "Lite". SQL Retriever is a part of the whole SCO Vision Windows to Unix connectivity suite.
The full product contains file and print sharing facilities such as their "Unix Neighborhood" while the Lite does not. SCO says the only other feature you would lose with the Lite driver is the ability to run over serial lines. Installation is a two step process: once for each PC, once for the server. The PC install is uneventful; just follow the prompts and the defaults should be fine.

When setting up an ODBC source, though, you will see no spot for parameters. SCO apparently believed that you could put them in the database name, which may work for some applications. But for many this will create a database name that the application will not see as legal (particularly with spaces and dashes). SCO does allow a spot for such parameters, but for some strange reason it is not accessible by default. You will find a Dboptions space in the DSN setup, but greyed out. The solution is to go to the registry, look for HKEY_LOCAL_MACHINE/SOFTWARE/SCO/VWODBC.INI/Progress/Dboptions, and set this to yes. Although I have not tested it, SCO Tech Support says that 16 bit installs will have a VWODBC.INI file in the Windows directory. In the section with [Progress], enter a line with Dboptions=yes. Other than that, creating the DSN doesn't seem to be more complicated than setting the host name, the Unix path name for the database, and the -H, -S and -N parameters for the database connection.

On the server, the installation scripts are slightly more complicated. A SCO license manager will be installed that will scan the network for existing servers. Be careful: this will go through your entire hosts file and could take a while deciding that each one of your PCs is not an SCO manager host. The installation should not be much more difficult than letting it set up the proper services. While SQL Retriever does execute on the server, the server component comes only in very specific Progress V7 minor releases. Unlike OpenLink's MT driver, there is no way to probuild a new server for your specific platform.
SCO has chosen to simplify the setup by shipping only one version of the binaries. So under most circumstances you will need to use a client/server connection due to the differences in shared memory versions among Progress sub-releases. That means that your server must have Progress server networking installed, and the network broker started. Just follow the Progress documentation: create a port number in /etc/services and use -H, -S and -N when you use proserve.

**Progress**

The Version 6 driver is discontinued and therefore dropped from the FAQ. The new Skywalker ODBC driver needs no special explanations for setup. There is nothing other than starting a server that needs to be done on the database side. On the client side, simply run SETUP.EXE and you're done. Now that was the way it was meant to be all along.

**OID/OIB Setup**

Since the current crop of single tier drivers all use the OID/OIB, an explanation is in order. The Open Interface Broker is basically a network program that accepts connections and spawns an Open Interface Driver session. The OID then sends data back to the program at the other end of the connection. OIDs can access both local and remote databases using all of the methods that a normal Progress session can.

Before actually starting any servers, it is critical to set up a set of environment variables. You must set PROOIBRK, PROOIDRV, DLC, PROMSGS, PROCFG, and PROSTARTUP. For Unix systems, these need to be set before the OIB is started. That could be either in your .profile/.login or in an OIB startup script. For Win 3.1 and Win 95 systems, put them in your AUTOEXEC.BAT. Under Windows NT you will need to go to the Environment tab of the Control Panel's System icon. Also be sure to include both the DLC directory and the DLC/bin directory in your PATH.
Unix sample:

```
DLC=/usr/dlc
export DLC
PATH=$PATH:$DLC:$DLC/bin
export PATH
PROMSGS=$DLC/promsgs
PROCFG=$DLC/progress.cfg
PROSTARTUP=$DLC/startup.pf
PROOIBRK=$DLC/bin/_prooibk
PROOIDRV=$DLC/bin/_prooidv
export PROMSGS PROCFG PROSTARTUP PROOIBRK PROOIDRV
```

Windows 95 AUTOEXEC.BAT sample:

```
PATH=%PATH%;C:\DLC;C:\DLC\BIN
SET DLC=C:\DLC
SET PROMSGS=C:\DLC\PROMSGS
SET PROCFG=C:\DLC\PROGRESS.CFG
SET PROSTARTUP=C:\DLC\STARTUP.PF
SET PROOIBRK=C:\DLC\BIN\OIBRKR.EXE
SET PROOIDRV=C:\DLC\BIN\OIDRVR.EXE
```

To set up the OIB, first make sure your database has a multi-user server already started. On Windows standalone machines up until Version 8.2, this can be the WIPC server, started with the command:

```
proserve C:\DATABASE\DBNAME -N WIPC
```

Edit your TCP/IP services file to set aside a TCP protocol port for the OIB's use. The actual file location varies from system to system: /etc/services for most Unix variants, \WINDOWS\SERVICES or \WINNT\SYSTEM32\DRIVERS\ETC\SERVICES for Microsoft TCP/IP, etc. The port you choose is arbitrary, but it must not be in use by another application. It should look something like:

```
oibservice 6400/TCP
```

Next, actually start the OIB using the name you chose above as the service:

```
oibrkr -SV -S oibservice -N TCP
```

Important Note: oibservice is NOT the Progress database port number used in making a client/server connection. You will get an error message from the ODBC driver saying there is a database protocol error if you try to put the Progress database port number here. The two service numbers must be unique.

**What Works With What**

Here is a list of combinations of third party applications, and whether or not they are known to work with ODBC drivers from the various vendors. Please note the list represents the experiences of users trying to run these applications. In some cases, abandoning a project or specific driver may have been considered preferable to spending the time to get the application working. Please do not take a "DOES NOT WORK" designation as the final word; YMMV.
Also keep in mind that a designation of "WORKS" does not mean the environment is without its problems.

**ODBC Application Compatibility**

<table>
<thead>
<tr>
<th>Application</th>
<th>Viability</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>Visual Basic</td>
<td>OK</td>
<td>Requires "SQL-Passthrough" with the Intersolv Driver for array support. OpenLink supports arrays only in its Lite drivers if a special program is run against the database.</td>
</tr>
<tr>
<td>Visual C++</td>
<td>OK</td>
<td></td>
</tr>
<tr>
<td>Access</td>
<td>OK</td>
<td>Requires at least one unique index per file; 256 field per file limit. Too many indexes per file also seems to confuse Access. Arrays do not seem to work. The "Group By" check box appears to greatly enhance the performance of reports. YMMV.</td>
</tr>
<tr>
<td>Excel 95</td>
<td>OK</td>
<td>Requires 32 bit driver</td>
</tr>
<tr>
<td>Excel 97</td>
<td>OK</td>
<td>Requires ODBC 3.0 and 32 bit driver (Intersolv Version 3.01 or OpenLink Lite 32/MT 32). ODBC DLL's Version 3.0.28.22 do not seem to allow re-editing the query. The datasource name must either be "Default", or you must create a DSN file manually. OpenLink MT Series creates alternate file names when you use the setup.p (for array support) which do not work in the Query Wizard. Manual entry of SQL will resolve the problem.</td>
</tr>
<tr>
<td>Delphi</td>
<td>Limited</td>
<td>Sporadic reports of it not working, but possibly problems with outdated drivers. Delphi 2.0 requires BDE 3.51 or higher. The OpenLink MT broker on the server must be restarted if a Delphi V3 session crashes.</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Software</th>
<th>Status</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>BusinessObjects</td>
<td>Appears OK</td>
<td></td>
</tr>
<tr>
<td>Clear Access</td>
<td>OK</td>
<td></td>
</tr>
<tr>
<td>ReportSmith 3.0</td>
<td>OK</td>
<td></td>
</tr>
<tr>
<td>Crystal Reports</td>
<td>OK</td>
<td>Newer versions of Crystal generate SQL-92 which Progress does not support. There is a special DLL available to solve this problem. Intersolv 3.01 and 3.10 seem to require different DLL's. The situation keeps changing too fast to be described here. See Seagate's Web Site for current info.</td>
</tr>
<tr>
<td>Impromptu</td>
<td>OK</td>
<td>There is a Progress specific bug in the way Version 5 creates SQL. No fix at the time of writing.</td>
</tr>
<tr>
<td>Apptivity</td>
<td>OK</td>
<td>Now shipped with OpenLink MT and an ODBC/JDBC bridge. Requires at least one unique index per file. Appears to work with the Intersolv ODBC/JDBC bridge as well.</td>
</tr>
<tr>
<td>Actuate 2.0</td>
<td>OK</td>
<td>Requires 32 bit driver; unlike a native connection, does not require the OIB started in the same directory as the database.</td>
</tr>
<tr>
<td>Esperant 3.0</td>
<td>OK</td>
<td>Will not work with tables that have a hyphen in the name.</td>
</tr>
</tbody>
</table>

**Progress Version Compatibility**

<table>
<thead>
<tr>
<th>ODBC Driver</th>
<th>Version</th>
<th>Progress Server Version(s)</th>
<th>Progress Client Version(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intersolv ODBC</td>
<td>2.1</td>
<td>All V6</td>
<td>V6.2, V6.3</td>
</tr>
<tr>
<td></td>
<td>2.5</td>
<td>V6.3, All V7</td>
<td>V7.1, V7.2, V7.3</td>
</tr>
<tr>
<td></td>
<td>2.5</td>
<td>All V7, All V8</td>
<td>V8.0, V8.1, V8.2a(*)</td>
</tr>
<tr>
<td></td>
<td>3.01, 3.10</td>
<td>All V7, All V8</td>
<td>V8.2</td>
</tr>
<tr>
<td>OpenLink</td>
<td>Lite</td>
<td>V6.3, All V7, All V8</td>
<td>All V7, V8.0, V8.1</td>
</tr>
<tr>
<td></td>
<td>Lite (32 Bit)</td>
<td>All V7, All V8</td>
<td>V8.2</td>
</tr>
<tr>
<td></td>
<td>MT</td>
<td>All V6, All V7, All V8</td>
<td>N/A</td>
</tr>
<tr>
<td>Progress</td>
<td>V9</td>
<td>V9</td>
<td></td>
</tr>
<tr>
<td>SCO SQL Retriever</td>
<td>4.1</td>
<td>V7.3C (ODT &amp; UnixWare) and V7.3A (Sun OS) via shared memory, all other V6 and V7 via Client/Server</td>
<td>N/A</td>
</tr>
<tr>
<td>Esker Tun SQL</td>
<td></td>
<td>All V6</td>
<td>N/A</td>
</tr>
</tbody>
</table>

\* - Unsupported configuration, but does work

While many of the above configurations work, not all support shared memory connections for best performance. All environments where the client is one major version of Progress and the server is another will require a client/server connection.

**Database Triggers and ODBC**

ODBC does in fact cause Progress database triggers to fire. It's very important to understand the architecture to know where exactly they fire. For drivers that use the OID/OIB, such as Intersolv and the OpenLink Lite series, they fire inside of the OID. Since the OIB starts the OID, it will inherit both the environment variables and the current directory. This means all triggers will be referenced (as if you set -trig) relative to the current directory where you were when the OIB was started, and will have a PROPATH the same as was set when the OIB was initiated. Multi-tier drivers use a client agent to retrieve data, and will have the trigger fire inside of the agent. OpenLink uses a "probuild" client in its MT drivers that does not have full functionality. It is important to write Progress code compatible with this cut down client.

OpenLink presents a very special situation with triggers because OL renames the first connected database to "PUBLIC". This would not normally cause any great problem except that this alias will now most likely be different from the .r code produced for your triggers. There are two possible solutions. One is to connect to the database with it aliased as "PUBLIC" (i.e.
mpro dbname -ld public) and compile a version of your triggers specially for ODBC. Perhaps an easier and more universal solution is to connect a blank database as the first one in your DSN. OpenLink only aliases the first database, so any subsequent databases will appear with the correct name.

**Special Notes On ODBC 3.0 MS Query**

It appears that the latest version of MS Query is giving people quite a bit of a headache due to the new ODBC 3.0. This version of Query expects that the data source is a "File" DSN instead of a User DSN. I have not been able to find official documentation on this, but I did find a quick note in the help file. It seems File DSNs are meant as a way of sharing the DSN over a network. That does not mean sharing the data over the network, but the connection options, database names, etc. That way you can do the setup on one PC, and any time you need a modification there is only one place that needs to be changed. I have not found the exact contents of the DSN format, but it looks something like an older .ini file. For whatever reason they seem to be stored in "\Program Files\Common Files\ODBC". What I've found that seems to work very well is to download a copy of the ODBC SDK from the Microsoft site. (http://www.microsoft.com/data/ODBC/download/SDKDownload.htm) It contains a utility called CONVDSN.EXE. After executing this program once, all User DSNs will have a corresponding File DSN. In addition, future User DSNs will also be copied as they are created.

**Some Miscellaneous Comments**

Here are some additional tidbits that may help along the way. Intersolv and OpenLink treat the problem of arrays differently. Without any user intervention, Intersolv automatically converts the field into a field name with an underscore and the extent number. OpenLink requires you to run setup.p and then use the output as the tableview file.
If you need to switch drivers for some reason, this could be quite a problem unless your tool is generating SQL on the fly. It's not such a problem to go from Intersolv to OpenLink because you can always modify the .DAT, but the other direction really doesn't have a quick solution.

By all means, add -NL to either the DSN Database Options or your PROSTARTUP file when using Intersolv. It appears that when using certain functions to look at the schema, a limbo lock is left. Using the -NL option seems to solve this.

Be careful in your trigger code. SQL does have a very interesting difference in the way the undefined value (i.e. ?) is used. Also make sure that your triggers are in the correct path for where they are executing. That's the OID for Intersolv and the OpenLink Lite Small Client, Client Networking for the OpenLink Lite Large Client, and the ODBC broker for OpenLink MT, SCO SQL Retriever, or Esker.

Progress supplies the 32 bit OIB and OID (oibrkr32.exe and oidrvr32.exe) with Version 8.1, but they forgot the files they depend on. Two DLLs, category.dll and promsgs.dll, are missing.

Having trouble finding a programming error? Try using ODBC Version 3.0's tracing facility. Just go into the ODBC 32 icon in the control panel and click on the Tracing tab. You can then specify a log file to see where and when the SQL calls occur. If that doesn't help, by all means get the ODBC SDK from Microsoft. There are a load of SQL and ODBC test utilities included.

There are some cases where it appears the name of the database is too long, and you get something like "SQL MAX OWNER NAME LEN Exceeded Error". At least one instance of this was reported with Delphi and the BDE. It appears that this is actually a server side environment variable that can be set. A simple:

```
SQL_MAX_OWNER_NAME_LEN=80; export SQL_MAX_OWNER_NAME_LEN
```

reportedly fixed the problem.
There are other similar style errors that have occurred, and it might be worthwhile to try setting an environment variable as suggested by the error message.

The OID is a sort of headless Progress client, but it has been significantly stripped down. It apparently will only receive the following parameters from the OIB: -h, -RO, -NL, and -yy. If you need any other options, be sure to start the OIB with the PROSTARTUP parameter pointing to a .pf file. See Progress KnowledgeBase entry 18088.

**The ODBC DataServer**

In addition to connecting third party tools to a Progress database, ODBC is also used as a way to connect Progress to third party databases. The Progress ODBC DataServer theoretically allows use of a Progress client against any ODBC compatible data source. In reality, success will depend on the level and quality of ODBC support from both the third party database and the ODBC driver involved. Progress officially supports Microsoft Access Versions 1.0, 1.1 and 2.0 using Microsoft ODBC Drivers, and Informix On-Line with Intersolv ODBC Version 2.0 or higher and Informix I-Net 4.1 or later. Future directions indicate the ODBC DataServer will become an increasingly important DataServer product. Progress has suggested that future support of DB2, MS SQL Server, and perhaps Sybase may only be through the ODBC DataServer. The Progress web site should be consulted for the correct details: http://www.progress.com/core/progress/odbcds.htm

**JDBC and ODBC to JDBC Bridges**

If you're programming in Java, there is a standard similar to ODBC for Java. JDBC drivers are also available for Progress. They are generally of two different types: true JDBC drivers and ODBC to JDBC bridges. True JDBC involves your Java program making a connection to a database over a URL. ODBC bridges work the same way from the program side, but connect locally through the PC's ODBC driver. The database can still be remote; just the ODBC must be set up locally.
**Closing Note**

As has been noted earlier, the new Progress ODBC driver that is coming out with Skywalker has none of the horrible setup problems the other drivers suffer from. Due to the enormous benefits of the easy setup (not to mention the direct SQL access), this FAQ is now moot. While ICS has not specifically endorsed a driver in the past, that is no longer the case. The native connection seems absolutely unbeatable in all categories, and is the only driver we as a company intend to deal with in the coming future.

**Resource List**

If you need more information on ODBC and Progress, try the following:

**Books**

- Progress Manuals
- Inside ODBC by Kyle Geiger (Microsoft Press 1-55615-815-7)
- Microsoft ODBC 3.0 Software Development Kit and Programmer's Reference (Microsoft Press 1-57231-516-4)

**Web Sites**

- **Intersolv** [http://www.intersolv.com/products/dataconnectivity.htm](http://www.intersolv.com/products/dataconnectivity.htm)
- **OpenLink** [http://www.openlinksw.com](http://www.openlinksw.com)
- **Microsoft** [http://www.microsoft.com](http://www.microsoft.com)

**Acknowledgements**

Several people have been kind enough to describe their ODBC experiences. In addition, ODBC is a regular point of discussion on the Progress Email Group. The distilled knowledge from these two sources, along with the resources listed above, forms the basis of this document.
While a list of contributors could never be complete, here are some people who have shared their experiences:

- George Potemkin ([G.Potemkin@CSBI.spb.su](mailto:G.Potemkin@CSBI.spb.su))
- Clinton Hastings ([chastings@gcl.com.au](mailto:chastings@gcl.com.au))
- Jeff Lischka ([105503.401@compuserver.com](mailto:105503.401@compuserver.com))
- Glen West ([gwest@corp.atl.com](mailto:gwest@corp.atl.com))
- Michael Quattlebaum ([mquattle@ecs-inc.com](mailto:mquattle@ecs-inc.com))
- Murray Hermann ([murray.hermann@star.com.au](mailto:murray.hermann@star.com.au))
- Rune Sandbakken ([rune@Oslo4GW.Norway.NCR.COM](mailto:rune@Oslo4GW.Norway.NCR.COM))
- Pat Caulfield at SCO
- Emmon and Paul at OpenLink
null], [36572, 39203, null], [39203, 42343, null], [42343, 45327, null], [45327, 48706, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2322, true], [2322, 5594, null], [5594, 6756, null], [6756, 9862, null], [9862, 13095, null], [13095, 16302, null], [16302, 19488, null], [19488, 23284, null], [23284, 27513, null], [27513, 29735, null], [29735, 32997, null], [32997, 36572, null], [36572, 39203, null], [39203, 42343, null], [42343, 45327, null], [45327, 48706, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 48706, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 48706, null]], "pdf_page_numbers": [[0, 2322, 1], [2322, 5594, 2], [5594, 6756, 3], [6756, 9862, 4], [9862, 13095, 5], [13095, 16302, 6], [16302, 19488, 7], [19488, 23284, 8], [23284, 27513, 9], [27513, 29735, 10], [29735, 32997, 11], [32997, 36572, 12], [36572, 39203, 13], [39203, 42343, 14], [42343, 45327, 15], [45327, 48706, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 48706, 0.10204]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
03cf3e99d697c0c70bed59e159aee1febf3c2b96
[REMOVED]
{"Source-Url": "https://pure.tue.nl/ws/files/1912794/Metis215021.pdf", "len_cl100k_base": 9274, "olmocr-version": "0.1.49", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 47177, "total-output-tokens": 11598, "length": "2e13", "weborganizer": {"__label__adult": 0.00045371055603027344, "__label__art_design": 0.00048661231994628906, "__label__crime_law": 0.0006442070007324219, "__label__education_jobs": 0.0013418197631835938, "__label__entertainment": 0.00011807680130004884, "__label__fashion_beauty": 0.0002340078353881836, "__label__finance_business": 0.0005173683166503906, "__label__food_dining": 0.0005040168762207031, "__label__games": 0.0010004043579101562, "__label__hardware": 0.001285552978515625, "__label__health": 0.0011243820190429688, "__label__history": 0.00044417381286621094, "__label__home_hobbies": 0.00015497207641601562, "__label__industrial": 0.0008373260498046875, "__label__literature": 0.0004172325134277344, "__label__politics": 0.0005574226379394531, "__label__religion": 0.0006833076477050781, "__label__science_tech": 0.177001953125, "__label__social_life": 0.00014495849609375, "__label__software": 0.00750732421875, "__label__software_dev": 0.802734375, "__label__sports_fitness": 0.0004973411560058594, "__label__transportation": 0.00106048583984375, "__label__travel": 0.0002701282501220703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41495, 0.03306]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41495, 0.19938]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41495, 0.8597]], "google_gemma-3-12b-it_contains_pii": [[0, 2282, false], [2282, 5121, null], [5121, 7801, null], [7801, 10889, null], [10889, 12925, null], [12925, 15401, null], [15401, 18092, null], [18092, 20562, null], [20562, 22559, null], [22559, 25627, null], [25627, 26802, null], [26802, 29893, null], [29893, 32637, null], 
[32637, 35317, null], [35317, 38291, null], [38291, 41495, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2282, true], [2282, 5121, null], [5121, 7801, null], [7801, 10889, null], [10889, 12925, null], [12925, 15401, null], [15401, 18092, null], [18092, 20562, null], [20562, 22559, null], [22559, 25627, null], [25627, 26802, null], [26802, 29893, null], [29893, 32637, null], [32637, 35317, null], [35317, 38291, null], [38291, 41495, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41495, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41495, null]], "pdf_page_numbers": [[0, 2282, 1], [2282, 5121, 2], [5121, 7801, 3], [7801, 10889, 4], [10889, 12925, 5], [12925, 15401, 6], [15401, 18092, 7], [18092, 20562, 8], [20562, 22559, 9], [22559, 25627, 10], [25627, 26802, 11], [26802, 29893, 12], [29893, 32637, 13], [32637, 35317, 14], [35317, 38291, 15], [38291, 41495, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41495, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
1da9cb3fda32329594bff245245b02c569941166
# Where impact got Going

Michael Tiller¹, Dietmar Winkler²

¹ Xogeny Inc., USA, michael.tiller@xogeny.com
² Telemark University College, Norway, dietmar.winkler@hit.no

**Abstract**

This paper discusses the impact package manager. The primary goal of this project is to support the development of a healthy eco-system around Modelica. For many other languages, the existence of an easy to use package manager has made it easier for people to explore and adopt those languages. We seek to bring that same kind of capability to the Modelica community by incorporating useful features from other package managers like bower, npm, etc.

This paper is an update on the status of the impact package manager, which was discussed previously in (Tiller and Winkler 2014). This latest version of impact involves a complete rewrite that incorporates a more advanced dependency resolution algorithm. That dependency resolution will be discussed in depth along with many of the subtle issues that arose during the development of this latest version of impact. Along with a superior dependency resolution scheme, the new version of impact is much easier to install and use. Furthermore, it includes many useful new features as well.

*Keywords: Modelica, package management, GitHub, dependency resolution, golang*

### 1 Introduction

#### 1.1 Motivation

The motivation behind the impact project is to support two critical aspects of library development. The first is to make it very easy for library developers to publish their work. The second is, at the same time, to make it easy for library consumers to both find and install published libraries. We also feel it is important to reinforce best practices with respect to model development. For this reason, we have made version control an integral part of our solution.
Rather than putting users in a position to have to figure out how to make impact work with a version control system, we’ve built impact around the version control system. Not only do users not have to find a way to make these technologies work together, impact actually nudges those not using version control toward solutions that incorporate version control. In this way, we hope to demonstrate to people the advantages of both impact and version control and establish both as “best practices” for model development.

By creating a tool that makes it easy to both publish and install libraries, we feel we are creating a critical piece of the foundation necessary to establish a healthy eco-system for model development.

#### 1.2 History

Earlier, we mentioned that impact has been completely rewritten. In fact, the very first version of impact was just a single Python script for indexing and installing Modelica code (Tiller 2013). It eventually evolved into a multi-file package that could be installed using the Python package management tools.

### 2 Requirements

After building the original Python version, we gave some thought to what worked well and what didn’t. One issue we ran into immediately was the complexity of installing the Python version of impact. Python is unusual in that it has two package managers, easy_install and pip. It comes with easy_install, but pip is the more capable package manager. So in order for someone to install impact, they first needed to install Python, then install pip and then install impact. This was far too complicated. So we wanted to come up with a way for people to install impact **as a simple executable** without any run-time or prerequisites.

Another issue we ran into with the Python version was the fact that there are two different and incompatible versions of Python in use today (*i.e.*, 2.x and 3.x). Trying to support both was an unnecessarily inefficient use of resources.
We also had some difficulties in the Python version with support for SSL under Windows (StackOverflow 2010). Because we were doing lots of “crawling” (more on this shortly), we needed a platform that provided **solid HTTP client support**. For these reasons, we felt we needed to move away from Python altogether.

Although most Modelica users run their development tools and simulations under Windows, there are several tools that support OSX and Linux as well as Windows. So as to not neglect users of those tools and to support more cross-platform options, we also wanted to be able to compile impact for all three major platforms. Furthermore, we wanted to provide a simple executable for all platforms without having to have actual development machines for each of these different platforms. For this reason, cross compilation between different platforms was an important consideration as well.

Of course, we also wanted to have good performance. For most package management related functions, the speed of the internet connection is probably the biggest limiting factor, so CPU performance wasn’t that high on the list. But, as we shall discuss shortly, the computational complexity of the dependency resolution algorithm we implemented could lead to some computationally intensive calculations for complex systems of dependencies.

For these reasons, we ultimately rewrote impact in Go (Go-Developers 2014). Go is a relatively new language from Google that stresses simplicity in language semantics but, at the same time, provides a fairly complete standard library. You can think of Go as being quite similar to C with support for extremely simple object-oriented functionality, automatic garbage collection and language level support for CSP-based concurrency. With Go, we were able to satisfy all the requirements above.
### 3 Version Numbering

Before we dive into all the details associated with crawling, indexing, resolving and installing, it is useful to take a moment to briefly discuss versioning. Modelica supports the notion of versions through the use of the version and uses annotations. These two annotations allow libraries to explicitly state what version they are and what versions of other libraries they use, respectively.

But there is one complication to the way Modelica deals with versions. In Modelica, a version is simply a string. This by itself isn’t a problem. But it becomes a problem, as we will discuss in greater detail shortly, when you need to understand relationships between versions. In particular, there are two important things we would like to determine when dealing with version numbers. The first is an unambiguous ordering of versions. In other words, which, of any two versions, is the “latest” version? The second is whether a newer version of a library is “backwards compatible” with a previous version. These are essential questions when trying to resolve dependencies, and the current string based approach to versions in Modelica is not semantically rich enough to help us answer either of them.

This issue is not unique to the Modelica world. These same questions have been asked for a very long time and various approaches have been invented to answer them. One recent and widely used approach is to employ what is called semantic versioning (Preston-Werner 2014). Semantic versioning is pretty much what it sounds like: an approach to defining version numbers where the version numbers have very explicit meanings associated with them. A very simple summary of semantic versioning would be that all versions have exactly three numerical components: a major version number, a minor version number and a patch. A semantic version must have all of these numbers and they must be .-separated.
For this reason, the following versions are not legal semantic version numbers: 1, 1a, 1.0, 1.0-beta5, 4.0.2.4096.

Each of the three numbers in a semantic version means something. If you make a non-backward compatible change, you must increment the major version. If you make a backward compatible change (one that adds new capability), you must increment the minor version. If you make a change that should be completely compatible with the previous version (e.g., doesn’t add any new capability), you increment only the patch version. There are additional provisions in semantic versioning to handle pre-release versions as well as build annotations. We will not discuss those semantics here, but they are incorporated into our implementation’s treatment of version numbers.

Our use of semantic versioning is aligned with our goal of strongly encouraging best practices. It is important to point out that the use of semantic versions is completely legal in Modelica. In other words, Modelica allows a wider range of interpretations of version numbers. By using semantic versions, we narrow these interpretations, but we feel that this narrowing is much better for the developer since it also provides meaning to the version numbers assigned to a library.

However, because Modelica libraries are free to use nearly any string as a version number, we need to find a way to “bridge the gap” between past usage and the usage we are encouraging moving forward. Although internally impact understands only semantic versions, it is still able to work with nearly all existing Modelica libraries. This is achieved through a process of “normalizing” existing versions. When impact comes across versions that are not legal semantic versions, it attempts to create an equivalent semantic version representation. For example, a library with a version string of 1.0 would be represented by the semantic version 1.0.0.
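impact itself is written in Go, but the ordering, compatibility, and normalization rules described above are easy to illustrate in a few lines of Python. The function names are ours, and this sketch deliberately ignores the pre-release and build fields that full semantic versioning defines:

```python
import re

SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")  # no pre-release/build fields

def parse(version):
    """Turn a legal semantic version into a comparable (major, minor, patch) tuple."""
    m = SEMVER.match(version)
    if m is None:
        raise ValueError("not a semantic version: " + version)
    return tuple(int(g) for g in m.groups())

def is_newer(a, b):
    """Unambiguous ordering: is version a later than version b?"""
    return parse(a) > parse(b)

def is_backwards_compatible(new, old):
    """Same major version and at least as recent."""
    return parse(new)[0] == parse(old)[0] and parse(new) >= parse(old)

def normalize(version):
    """Best-effort normalization of purely numeric versions,
    e.g. "1.0" -> "1.0.0"; returns None for strings like "1a"
    or "4.0.2.4096" that have no obvious semantic equivalent."""
    parts = version.strip().split(".")
    if len(parts) > 3 or not all(p.isdigit() for p in parts):
        return None
    return ".".join(parts + ["0"] * (3 - len(parts)))
```

Note how tuple comparison answers both of the questions posed earlier (ordering and backward compatibility) once the string has been parsed; this is exactly what the bare version strings in Modelica annotations cannot do on their own.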
For this normalization to work, it is important to make sure that the normalization is performed both on the version number associated with a library and on the version numbers of the libraries used. In other words, it must be applied consistently to both the version and uses annotations.

### 4 Indexing

As mentioned previously, there are two main functions that impact performs. The first is making it easy for library developers to publish their libraries and the other is making it easy for consumers to find and install those same libraries. Where these two needs meet is the library index. The index is built by collecting information about published libraries. The same index is used by consumers searching for information about available libraries. Building the index involves crawling through repositories and extracting information about libraries that those repositories contain. In the following section we will discuss this crawling process in detail and describe the information that is collected and published in the resulting index.

#### 4.1 Sources

Currently, impact only supports crawling GitHub (GitHub 2014) repositories. It does this by using the GitHub API (GitHub-Developers 2014) to search through repositories associated with particular users and to look for Modelica libraries stored in those repositories. We will shortly discuss exactly how it identifies Modelica libraries. But before we cover those details it is first necessary to understand which versions of the repository it looks into.

Each change in a Git repository involves a commit. That commit affects the contents of one or more files in the repository. During development, there are frequent commits. To identify specific versions of the repository, a tag can be associated with that version. Each tag in the repository history whose name starts with a version number is treated by impact as a released version of the repository’s contents.
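The tag filter just described can be sketched as follows. This is a Python illustration, not impact's Go code; accepting a leading "v" is a common Git tagging convention and an assumption on our part, not something the paper specifies:

```python
import re

# Matches tag names that start with a version number, e.g.
# "1.0.0", "v1.2", "1.0-beta" (the version prefix is captured).
RELEASE = re.compile(r"^v?(\d+(\.\d+){0,2})")

def release_version(tag):
    """Return the version-number prefix of a tag name,
    or None if the tag does not start with a version number."""
    m = RELEASE.match(tag)
    return m.group(1) if m else None
```

Tags that yield a version (after normalization to a full semantic version) would then be the snapshots that the crawler inspects; development tags like "experimental" are skipped.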
#### 4.2 Repository Structure

For each version of a repository tagged with a semantic version number, impact inspects the contents of that version of the repository looking for Modelica libraries. There are effectively two ways that impact finds Modelica libraries in a repository. The first is to check for libraries in “obvious” places that conform to some common conventions. For cases where such conventions are insufficient, impact looks for a file named impact.json to explicitly provide information about the repository.

##### 4.2.1 Conventions

With respect to impact, the following is a list of “obvious” places that impact checks for the presence of Modelica libraries:

- ./package.mo or ./<dirname>/package.mo: the directory <dirname> is presumed to be a Modelica package.
- ./<filename>.mo or ./<filename> <ver>.mo: the file ./<filename>.mo is a file containing a Modelica library.

In all cases, the name of the library is determined by parsing the actual Modelica package definition and is not related to the name of the repository. As can be seen from these conventions, only files and directories that exist at the root level are checked for Modelica content.

##### 4.2.2 impact.json

For various reasons, library developers may not wish to conform to the repository structure patterns discussed previously. Furthermore, there may be additional information they wish to include about their libraries. For this reason, a library developer can include an impact.json file in the root of the repository directory that provides additional information about the contents of the repository. For example, a repository may contain two or more Modelica libraries in subdirectories. The impact.json file allows information about the storage location of each library in the repository to be provided by the library developer. Furthermore, the author may wish to include contact information beyond what can be extracted from information about the repository and its owner.
These are just a few use cases for why an impact.json file might be useful for library developers. A complete schema for the impact.json file can be found later in Section 4.4.2.

#### 4.3 Handling Forks

The Modelica specification implicitly assumes that each library is uniquely identified by its name. This name is used in both the version and uses annotations as well as any references in Modelica code (e.g., Modelica in Modelica.SIunits). This assumption works well when discussing libraries currently loaded into a given tool. But when you expand the scope of your “namespace” to include all libraries available from multiple sources, the chance for overlap becomes possible and must be dealt with.

Previously, we mentioned the importance of supporting best practices in model development and the specific need to accommodate version control as part of that process. Up until now, we have leveraged version control to make the process of indexing and collecting libraries easier. However, version control does introduce one complexity as well. That complexity is how to deal with forks.

Forks are common in open source projects and typically occur when there are multiple perspectives on how development should progress on a given project. In some cases, rather than reconciling these different perspectives, developers decide to proceed in different directions. When this happens, the project becomes “forked” and there are then (at least) two different libraries being developed in parallel. Each of these libraries may share a common name and perhaps even the same version numbers but still be fundamentally different libraries.

A fork can arise for another, more positive, reason. When someone improves a library they may not have permission to simply fold their improvement back into the original library. On GitHub in particular, it is extremely common for a library to be forked simply to enable a third party to make an improvement.
The author of the improvement then sends what is called a pull request to the library author asking them to incorporate the improvement. In such a workflow, the fork is simply a temporary measure (akin to a branch) to support concurrent development. Once the pull request is accepted, the fork can be removed entirely.

Regardless of why the fork occurs, it is important that impact accommodates cases where forking occurs. This is because forking is a very common occurrence in a healthy eco-system. It indicates progress and interest and we should not do anything to stifle either of these. The issue with forking is that the same name might be used by multiple libraries. In such cases, we need a better way to uniquely identify libraries. For this reason, impact records not only the library name, but also a URI associated with each library. In this way, the URI serves as a completely unambiguous way of identifying different libraries. While two forks may have the same name, they will never have the same URI.

#### 4.4 Schema

We’ve mentioned the kinds of information impact collects while indexing as well as the kind of information that might be provided by library developers (via impact.json files). In this section, we will provide a complete description of the information used by impact.

##### 4.4.1 impact_index.json

As part of the indexing process, impact produces an index file named impact_index.json. This is a JSON encoded representation of all the libraries found during indexing. The root of an impact_index.json file contains only two elements:

- **version**: A string indicating what version of impact generated the index. The string is, of course, a semantic version.
- **libraries**: An array in which each element describes a library that was found. The order of the elements is significant. Libraries that occur earlier in the list take precedence over libraries that appear later. This is important in cases where libraries have the same name.
For each library in the libraries array, the following information may be present:

- **name**: The name of the library (as used in Modelica)
- **description**: A textual description of the library
- **stars**: A way of “rating” libraries. In the case of GitHub, this is the number of times the repository has been starred. But for other types of sources, other metrics can be used.
- **uri**: A URI to uniquely identify the given library (when it shares a common name with another library)
- **owner_uri**: A URI to uniquely identify the owner of the library
- **email**: The email address of the owner/maintainer of the library
- **homepage**: The URL for the library’s homepage
- **repository**: The URI for the library’s source code repository
- **format**: The format of the library’s source code repository (e.g., Git, Mercurial, Subversion)
- **versions**: An object that maps a semantic version (in the form of a string) to details associated with that specific version

The details associated with each version are as follows:

- **version**: A string representation of the semantic version (i.e., one that is identical to the key).
- **tarball_url**: A URL that points to an archive of the repository in tar format.
- **zipball_url**: A URL that points to an archive of the repository in zip format.
- **path**: The location of the library within the repository.
- **isfile**: Whether the Modelica library is stored as a file (true) or as a directory (false)
- **sha**: A hash associated with this particular version. This is currently recorded by impact during indexing but not used. Such a hash could be useful for caching repository information locally.
- **dependencies**: An array listing the dependencies that this particular version has on other Modelica libraries. Each element in this array is an object with one field, name, giving the name of the required library and another field, version, which is the semantic version of that library represented as a string (see the previous discussion on normalization in Section 3).
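As an illustration, a minimal impact_index.json entry with a single hypothetical library might look like the following. All names, URIs and values below are invented for the example; only the field names come from the schema above:

```json
{
  "version": "1.0.0",
  "libraries": [
    {
      "name": "MyLib",
      "description": "An example Modelica library",
      "stars": 42,
      "uri": "github.com/someuser/MyLib",
      "repository": "https://github.com/someuser/MyLib.git",
      "format": "git",
      "versions": {
        "1.0.0": {
          "version": "1.0.0",
          "zipball_url": "https://github.com/someuser/MyLib/archive/v1.0.0.zip",
          "path": "MyLib",
          "isfile": false,
          "dependencies": [
            {"name": "Modelica", "version": "3.2.1"}
          ]
        }
      }
    }
  ]
}
```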
##### 4.4.2 impact.json

As mentioned previously in Section 4.2.2, each repository can include a file named impact.json that provides explicit information about the Modelica libraries contained in that repository. The root of the impact.json file contains the following information:

- **owner_uri**: A link to information about the library’s owner
- **email**: The email address of the owner or maintainer
- **alias**: An object whose keys are the names of libraries and whose associated values are the unique URIs of those libraries. This information can, therefore, be used to disambiguate between dependencies where there may be multiple libraries with that name.
- **libraries**: This is an array where each element is an object that contains information about a library present in the repository.

For each library listed in the libraries field, the following information may be provided:

- **name**: The name of the library
- **path**: The path to the library
- **isfile**: Whether the entity pointed to by path is a Modelica library stored as a file (true) or as a directory (false).
- **issues_url**: A link pointing to the issue tracker for this library
- **dependencies**: An explicit list of dependencies for this library (if not provided, the list will be based on the uses annotations found in the package definition). Each dependency in the list should be an object that provides the following information:
  - **name**: Name of the required library
  - **uri**: Unique URI of the required library
  - **version**: Semantic version number of the required library (represented as a string)

### 5 Installation

The previous section focused on how impact collects information about available libraries. The main application for this information is to support installation of those libraries. In this section, we’ll discuss the installation side of using impact.
#### 5.1 Dependency Resolution

##### 5.1.1 Background

To understand the abstract problem behind the concept of a dependency, we refer to the formal study undertaken in (Boender 2011). There, a repository is defined as a triple \((R, D, C)\) of a set of packages \(R\), a dependency function \(D : R \rightarrow \mathcal{P}(\mathcal{P}(R))\), and a conflict relation \(C \subseteq R \times R\). At that level, version numbers have been abstracted to (distinguishable) packages: every version yields a distinct package \(p \in R\). The dependency function \(D\) maps a package \(p\) to sets of sets of packages \(d \in D(p)\), where each set represents a way to provide one required feature of \(p\). In other words: if for each \(d \in D(p)\) at least one package in \(d\) is installed, it is possible to use \(p\).

Currently, there is no way to express conflicts directly in a Modelica package. However, due to the existence of external libraries (which could conflict in arbitrary ways), it is likely that such a need will arise in the future. Additionally, current Modelica makes it impossible to refer to two different versions of a library from the same model. Hence, we consider different versions of the same package conflicting.

The dependency resolution of impact fits into Boender’s model. Therefore, the conclusions drawn in (Boender 2011) can be applied to impact as well: the set of packages impact installs for a given project needs to fulfill two properties, which Boender calls abundance and peace. Informally, abundance captures the requirement that all dependencies be met while peace avoids packages that are in conflict with each other. A set of packages that is peaceful and abundant is called healthy, and a package \(p\) is called installable w.r.t. a given repository if and only if there exists a healthy set \(I\) in said repository such that \(p \in I\). The problem of finding such an installable set is however a hard one.
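Checking whether a given set of packages is healthy under these definitions is straightforward. A Python sketch of Boender's conditions follows; the function names and data shapes (a dict for \(D\), a set of pairs for \(C\)) are ours:

```python
def is_abundant(installed, D):
    """Every dependency set of every installed package is met by
    at least one installed package."""
    return all(any(p in installed for p in d)
               for q in installed
               for d in D.get(q, []))

def is_peaceful(installed, C):
    """No two installed packages are in conflict. Different versions
    of the same library are modeled as conflicting packages."""
    return not any(((a, b) in C or (b, a) in C)
                   for a in installed for b in installed if a != b)

def is_healthy(installed, D, C):
    """A set is healthy iff it is both abundant and peaceful."""
    return is_abundant(installed, D) and is_peaceful(installed, C)
```

Verifying the health of a candidate set is cheap; the hard part is searching the space of candidate sets for a healthy one.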
In fact, Boender proves, by a simple isomorphism between the boolean satisfiability problem and dependency resolution, that finding such a set is NP-hard. Fortunately, for the current typical problem size, this isn’t really an issue.

##### 5.1.2 Resolution Algorithm

The indexing process collects quite a bit of information about available libraries. Most of the complexity in implementing the installation functionality in impact is in figuring out what to install. And most of that complexity is in finding a set of versions for the required libraries that satisfies all the dependency relations. This process is called dependency resolution.

The resolution algorithm starts with a list of libraries that the user wants to install. In some cases, this may be a single library but, in general, the list can be of any length. For each library in the list, the user may specify a particular version of the library they wish to install, but this isn’t mandatory. One important point here is that we refer to this as a list, not a set. Order is significant: the libraries that appear first are given a higher priority than those that appear later.

Let’s explain why this priority is important. Consider a user who wishes to install libraries A and B. If the user has not explicitly specified what version of each library they are interested in, impact assumes the user wants the latest version, if possible. But what if the latest version of both cannot be used? To understand this case, consider the following constraints:

    A:1.0.0 uses B:2.0.0
    A:2.0.0 uses B:1.0.0

where A:1.0.0 means version 1.0.0 of library A. This example is admittedly contrived, but the underlying issue is not. We can see here that if we want the latest version of A, we cannot also use the latest version of B (and vice versa) while still honoring the constraints above. The ordering of the libraries determines how we “break the tie” here.
Since A appears first, we assume it is more important to have the latest version of A than to have the latest version of B. Let's take this extremely simple example to outline how the resolution algorithm would function in this case. In later sections, we'll introduce additional complexities that must be dealt with. If a user asks for libraries A and B to be installed, the question that the dependency algorithm has to answer is **which versions** do we use. Assuming that each library has a version 1.0.0 and 2.0.0, then each "variable" in this problem has two possible values. The following table enumerates all four possibilities: <table> <thead> <tr> <th>Version of A</th> <th>Version of B</th> </tr> </thead> <tbody> <tr> <td>1.0.0</td> <td>1.0.0</td> </tr> <tr> <td>1.0.0</td> <td>2.0.0</td> </tr> <tr> <td>2.0.0</td> <td>1.0.0</td> </tr> <tr> <td>2.0.0</td> <td>2.0.0</td> </tr> </tbody> </table> This is a simple enumeration of the possibilities. But remember, we assume the user wants the most recent version and we assume A is more important than B. Semantic versioning provides us with a basis for determining which version is more recent. Given these, we reorder these combinations so that the most desirable combinations appear first and the least desirable appear last: <table> <thead> <tr> <th>Version of A</th> <th>Version of B</th> </tr> </thead> <tbody> <tr> <td>2.0.0</td> <td>2.0.0</td> </tr> <tr> <td>2.0.0</td> <td>1.0.0</td> </tr> <tr> <td>1.0.0</td> <td>2.0.0</td> </tr> <tr> <td>1.0.0</td> <td>1.0.0</td> </tr> </tbody> </table> Now we see the impact of the dependency constraints. Specifically, the first (most desirable) combination in this table does not satisfy the dependency constraints (i.e., A:2.0.0 does not work with B:2.0.0).
If we eliminate rows that violate our dependency constraints, we are left with: <table> <thead> <tr> <th>Version of A</th> <th>Version of B</th> </tr> </thead> <tbody> <tr> <td>2.0.0</td> <td>1.0.0</td> </tr> <tr> <td>1.0.0</td> <td>2.0.0</td> </tr> </tbody> </table> In summary, we order the combinations by their desirability (considering both the relative priority of the libraries and their version numbers) and then we eliminate combinations that don't satisfy our dependency constraints. This gives an overview of how the algorithm works conceptually. But, as you may have guessed, the problem is not quite this simple. Consider now a slightly more complex case with the following dependencies:

1. A:3.0.0 uses B:1.2.0
2. A:3.0.0 uses C:1.2.0
3. B:1.2.0 uses C:1.1.0
4. A:2.0.0 uses B:1.1.0
5. A:2.0.0 uses C:1.1.0
6. B:1.1.0 uses C:1.0.0
7. A:1.0.0 uses B:1.0.0
8. A:1.0.0 uses C:1.0.0
9. B:1.0.0 uses C:1.0.0

Now we have three variables we need to solve for, A, B and C. For each variable, we have three possible values. As we've already described, newer versions are preferred over older versions while searching. This means that the first combination we will consider will be A:3.0.0, B:1.2.0 and C:1.2.0, and the last combination we will consider will be A:1.0.0, B:1.0.0 and C:1.0.0. There are several interesting things to notice about this case. First, although the problem is not particularly large (3 libraries with 3 versions each), the number of combinations to check is significant (i.e., 3·3·3 = 27). Of these 27 combinations, only the last one to be considered (i.e., the least desirable) satisfies the dependency constraints. There is nothing we can really do about the fact that the oldest version of each of these libraries must be used (this is dictated by the dependencies themselves and has nothing to do with the algorithm). But the complication is that we must consider all of them (in this contrived case) before finding the one we want.
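A brute-force version of this enumerate-and-filter search can be sketched in Go. The constraint maps below are our own encoding of the example's "uses" relations, written (as the narrative requires) so that only the oldest combination of all three libraries is valid:

```go
package main

import "fmt"

// solve enumerates all combinations for the three-library example,
// newest-first, with A outranking B outranking C, and returns the first
// combination that satisfies every "uses" constraint.
func solve() (string, string, string, int) {
	aVers := []string{"3.0.0", "2.0.0", "1.0.0"}
	bVers := []string{"1.2.0", "1.1.0", "1.0.0"}
	cVers := []string{"1.2.0", "1.1.0", "1.0.0"}

	// The nine constraints, keyed by the depending version.
	aNeedsB := map[string]string{"3.0.0": "1.2.0", "2.0.0": "1.1.0", "1.0.0": "1.0.0"}
	aNeedsC := map[string]string{"3.0.0": "1.2.0", "2.0.0": "1.1.0", "1.0.0": "1.0.0"}
	bNeedsC := map[string]string{"1.2.0": "1.1.0", "1.1.0": "1.0.0", "1.0.0": "1.0.0"}

	checked := 0
	for _, av := range aVers {
		for _, bv := range bVers {
			for _, cv := range cVers {
				checked++
				if aNeedsB[av] == bv && aNeedsC[av] == cv && bNeedsC[bv] == cv {
					return av, bv, cv, checked
				}
			}
		}
	}
	return "", "", "", checked
}

func main() {
	a, b, c, n := solve()
	fmt.Printf("solution A:%s B:%s C:%s after checking %d of 27 combinations\n", a, b, c, n)
}
```

Running this confirms the point made above: the only valid combination is the very last of the 27 enumerated, so naive enumeration does maximal work.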
In reality, we would not actually enumerate all possibilities *a priori*. Instead, we would simply consider each "variable" one at a time and loop over all possible versions. If, at any point, we find a conflict with our constraints, we simply break out of the innermost loop. This is referred to as **backtracking**. In Modelica pseudo-code, the algorithm (for this specific case) might look like this: ```pseudo-code
for A in ["3.0.0", "2.0.0", "1.0.0"] loop
  for B in ["1.2.0", "1.1.0", "1.0.0"] loop
    if not are_compatible(A,B) then
      break;
    end if;
    for C in ["1.2.0", "1.1.0", "1.0.0"] loop
      if not are_compatible(B,C) then
        break;
      end if;
      if not are_compatible(A,C) then
        break;
      end if;
      // If we get here, we have a solution
    end for;
  end for;
end for;
``` Using this backtracking, we can more efficiently traverse the possibilities by eliminating lots of cases that we know are a dead end (especially in larger problems). Any search based on backtracking is vulnerable to poor performance under certain (typically pathological) conditions. We'll return to this point later when we talk about performance of our current implementation. There is one last complication we must deal with when resolving dependencies. Consider the following simple set of dependencies:

1. A:2.0.0 uses B:1.2.0
2. A:2.0.0 uses C:1.1.0
3. B:1.2.0 uses C:1.2.0
4. A:1.0.0 uses B:1.1.0 or B:1.0.0 (i.e., A can use B:1.1.0 or B:1.0.0)
5. A:1.0.0 uses D:1.1.0
6. B:1.0.0 uses C:1.1.0
7. C:1.2.0 uses D:1.0.0
8. B:1.1.0 uses C:1.2.0

We can also represent this set of dependencies graphically as shown in Figure 1. Graphically, we have a box to represent each library and that box contains the different versions available. These versions are connected by the constraints shown in the table above. Figure 1.
Graphical representation of package dependencies Given these dependencies and the fact that the user wishes to install both A and B, what are the variables in our dependency resolution algorithm? Obviously, we must consider all the versions of both A and B (i.e., we must pick a version from the box for library A and B in Figure 1). But what about C and D? It makes no sense to enumerate all combinations of versions for these four libraries because in many cases D isn't even required. Furthermore, what is their relative priority (i.e., if a choice is required, is it more important to have the latest version of C or D?) When resolving dependencies, we only introduce new libraries when necessary (i.e., if they are needed by our current choices of existing libraries) and their relative priority is determined by the relative priority of the library that introduced them. To understand how the resolution works in this case, first consider the case of A:2.0.0. This version cannot be chosen. This is because A:2.0.0 wants C:1.1.0 while B:1.2.0 wants C:1.2.0. So no choice for C is valid. Furthermore, we don't even consider D because it isn't required in any of these cases. Now if we move to the case of A:1.0.0, things are more complicated. Now we do need to consider both D and C. However, note that because A:1.0.0 depends directly on D, we consider D more important. This is important because when considering A:1.0.0 we have two versions of B that are compatible (i.e., B:1.1.0 and B:1.0.0).
Given that we are considering A:1.0.0 and we've already ruled out B:1.2.0, we are left with the following combinations: <table> <thead> <tr> <th>Version of B</th> <th>Version of D</th> <th>Version of C</th> </tr> </thead> <tbody> <tr> <td>1.1.0</td> <td>1.1.0</td> <td>1.2.0</td> </tr> <tr> <td>1.1.0</td> <td>1.1.0</td> <td>1.1.0</td> </tr> <tr> <td>1.1.0</td> <td>1.0.0</td> <td>1.2.0</td> </tr> <tr> <td>1.1.0</td> <td>1.0.0</td> <td>1.1.0</td> </tr> <tr> <td>1.0.0</td> <td>1.1.0</td> <td>1.2.0</td> </tr> <tr> <td>1.0.0</td> <td>1.1.0</td> <td>1.1.0</td> </tr> <tr> <td>1.0.0</td> <td>1.0.0</td> <td>1.2.0</td> </tr> <tr> <td>1.0.0</td> <td>1.0.0</td> <td>1.1.0</td> </tr> </tbody> </table> Notice the ordering of the columns? Since the user originally asked for both A and B, B comes first. But when it comes to C and D, having a more recent version of D is more important than having a more recent version of C. As mentioned previously, we don't construct every combination. Furthermore, we don't always consider all libraries. The best way to understand how this search proceeds is to enumerate the partial combinations that our search generates and the points at which backtracking occurs. (The full search trace is not reproduced here.)

5.1.3 Formulating Constraints

The default assumption is that dependencies will come from the uses annotation in Modelica. There is a proposal to extend the uses annotation to allow multiple compatible versions to be listed (vs. only a single compatible version today). As mentioned previously, such an or relationship is already supported by impact. So this change would not affect the resolution algorithm used by impact. Although it hasn't yet been implemented, one proposed fallback mode for impact is to ignore the explicit dependencies contained in Modelica code and instead rely on the dependency relationships implicit in semantic versions.
In other words, if a library $B$ has two versions, 1.1.1 and 1.1.2, and those versions strictly follow semantic versioning conventions, then we know that any library that depends on $B:1.1.1$ must also be compatible with $B:1.1.2$. Such a fallback mode could be employed when impact is unable to find a solution using explicit constraints.

6 Go Implementation

We've created an implementation of impact using Go. This implementation includes different sub-packages for dealing with crawling repositories, resolving dependencies, parsing Modelica code and managing configuration settings. It also contains a sub-package implementing the command-line tool and all of its sub-commands. This structure means that impact is not only a command-line tool, but also a Go library that can be embedded in other tools. The Go implementation includes the following commands: - **search** Search library names and descriptions for search terms. - **install** Install one or more libraries and their dependencies. - **index** Build an index of repositories. - **version** Print out version and configuration information. For each command, you can use the -h switch to find out more about the command and its options. Earlier we described our requirements. The main reason we moved to Go from Python was Go's support for cross-compiling between all major platforms and the fact that it generates a statically linked binary that doesn't depend on any runtime. The Go standard library includes a complete implementation of HTTP for both the client and server and is, in fact, fairly complete overall. At the moment, the only third-party dependencies for impact are a Go implementation of the GitHub v3 API and an implementation of semantic versioning. The performance of compiled Go code is quite good. In Section 5.1.2 we described how the algorithm we are using could, in a worst case scenario, search every potential combination before finding either a solution or failing.
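As a rough illustration of such a worst case (our own sketch, not impact's actual code), the following Go program builds a pathological search: the hypothetical constraint admits only the assignment where every variable takes its oldest version, and it can only be evaluated on a complete assignment, so no partial pruning is possible and all \(2^n\) leaves must be visited:

```go
package main

import (
	"fmt"
	"time"
)

// worstCase searches n variables, each with two candidate versions tried
// newest-first. Only the all-oldest assignment is valid, and validity is
// checked only at the leaves, so backtracking gains nothing: the search
// visits all 2^n complete assignments. Returns the number of leaves visited.
func worstCase(n int) int {
	versions := []string{"2.0.0", "1.0.0"}
	assignment := make([]string, n)
	leaves := 0
	var search func(i int) bool
	search = func(i int) bool {
		if i == n {
			leaves++
			for _, v := range assignment {
				if v != "1.0.0" {
					return false // constraint violated; backtrack
				}
			}
			return true // the least desirable assignment is the only solution
		}
		for _, v := range versions {
			assignment[i] = v
			if search(i + 1) {
				return true
			}
		}
		return false
	}
	search(0)
	return leaves
}

func main() {
	for _, n := range []int{10, 14, 18} {
		start := time.Now()
		leaves := worstCase(n)
		fmt.Printf("n=%d: visited %d leaves in %v\n", n, leaves, time.Since(start))
	}
}
```

The timings this toy prints will not match the paper's measurements (which exercised the real resolver), but the exponential growth in visited leaves is the same phenomenon behind the table below.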
We constructed several test cases with $n$ variables where each variable had 2 possible values, giving $2^n$ possible combinations. These cases were contrived so that the least desirable combination was the only one that would satisfy the dependency constraints. We tested the time required to find a solution for different values of $n$ and got the following performance results: <table> <thead> <tr> <th>$n$</th> <th>Time (ms)</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>45</td> </tr> <tr> <td>12</td> <td>141</td> </tr> <tr> <td>14</td> <td>646</td> </tr> <tr> <td>20</td> <td>52,000</td> </tr> </tbody> </table> It is important to keep in mind that this is a contrived case to demonstrate the worst possible case for resolution. There may very well be other algorithmic approaches that find identical solutions but search more efficiently. But given what we know about Modelica libraries and their dependencies, we found this performance more than sufficient for our application. One last point worth making about the implementation of impact has to do with security. In order to generate an index from GitHub repositories, it is necessary to crawl repositories. In order to accomplish this, many API calls are required. GitHub will only allow a very limited number of "anonymous" API calls. This limit will be reached very quickly by impact. In order to increase the number of allowed API calls, GitHub requires an "API key" to be used. Such a key can be provided to impact, but not via a command line option or a configuration file. This is to avoid this sensitive information being inadvertently recorded or exposed (e.g., by committing it to a version control repository). Instead, such tokens must be provided as environment variables. The impact source code is licensed under an MIT license and is hosted on GitHub.
The GitHub repository (Xogeny 2015) includes a LICENSE, README.md and CONTRIBUTING.md which provide a detailed license, introductory documentation and instructions for contributors, respectively. We've linked the GitHub repository to a continuous integration service so that each commit triggers tests and emails build status to the maintainers.

7 impact on library developers

What does all this mean for library developers wanting to make their library accessible via impact? Let us first have a look at the past "sins" that were restricting the development work on Modelica libraries.

7.1 Observations

1. We noticed that the MODELICAPATH concept is not properly understood by users and often gets in their way. Therefore we should not rely on it, but rather work with all of our files collected into a working directory (which should always be part of the MODELICAPATH and be given first priority for look-up in the tool). 2. If we move away from having to collect all Modelica libraries in the MODELICAPATH, then there is no longer a need to store the version number in the library folder name. That is, simply "<PackageName>" is sufficient; there is no need for "<PackageName> <Version>". 3. Until now, we advised library developers to keep the current development version in a master branch and merge master into a release branch where the directory structure can be changed (e.g., into "<PackageName> <Version>" and any generated content can be added). Finally, developers should then place a tag on the release branch. This was done for the following reasons: - The link to the tag provided a (tar)zip file that contained the library with the "<PackageName> <Version>" format, ready to be used with MODELICAPATH. However, since we no longer rely on MODELICAPATH, we don't need to add the <Version> identifier to the folder anymore.
- If we wanted to see what state in master a certain release corresponded to, we needed to either inspect the git history (following backwards from the release tag) or use an additional tag (e.g., "1.2.3-dev"), which is rather cumbersome and seems unnecessary. But since GitHub also supports new alternatives (see below) there is no longer a need for a specific release branch. That is to say, library developers can still use it if they think it useful, but they don't have to anymore.

7.2 Repository structure recommendations

There are new features/mechanisms made available both by GitHub and impact: • GitHub's support for assets (GitHub-Blog 2013) allows us to upload additional files to tagged releases • impact does not use the MODELICAPATH model but rather uses a "one working directory per project" approach where (one version of) all required libraries and their dependencies live in one (working) directory. We recommend that library developers make the most out of the new features above and change the structure in which they organize their library repositories. 1. Get rid of the release branch if it existed only for the sake of providing a downloadable zip file with a customized structure or providing additional generated files. Instead use the new GitHub Releases (GitHub-Blog 2013), which allow uploading of additional assets for download. • E.g., rather than adding HTML documentation to the Resources sub-folder, committing this to the release branch and then tagging it, tag the master branch, generate a zip file which contains that state, and add the generated files to the tagged release. GitHub also provides some information on "Creating Releases" (GitHub-Help 2015a) and there exists, for example, the aktau/github-release tool (Hillegee 2015) to help automate that process. • Another benefit of the release assets is that the GitHub API (GitHub-Developers 2014) allows you to get the download count for your releases (GitHub-Help 2015b).
This was not possible for the simple tagged-zip-ball downloads. 2. Get rid of the <PackageName> <Version> formatted folder names. The version number does not belong in the master (i.e., development) branch anyway; the version number is contained in the version annotation, which tools will happily display for you. When you install a package with impact it will strip that version number in any case.

7.3 Changes for the library listing

The listing of Modelica libraries on https://modelica.org/libraries is generated via the GitHub API by creating a static HTML file that contains all information with links. Currently it is a stand-alone Python script, but we are thinking of adding this functionality as a sub-module to impact itself. Up to May 2015 the listing pointed directly to the (tar)zip-ball URL of the latest tag of a library. This worked fine if the library used the old release branch model where the "ready to install" version was placed. Clicking on that coloured version link resulted in a direct file download. This has now been changed in such a way that if one clicks on the listed "Last Release" button one is redirected to the "Releases" page of that project showing the last release. This has the advantage that one does not immediately download the (tar)zip ball but gets to see proper release notes first and is given a choice of what version of a release to download (e.g., pure source distribution of that tag, customized version with additional files, different platform dependent versions with pre-compiled binaries).

7.4 Which license is best for your library

The Modelica Standard Library (Modelica) (Modelica Association 2013) is licensed under the "Modelica License Version 2.0" (Modelica Association 2008). So, in order to stay compatible with the Modelica library, most user libraries chose the same license. This seemed like a natural choice. However, there is one problem which is not immediately apparent to most library developers.
This is that the "Modelica License Version 2.0" contains the section "4. Designation of Derivative Works and of Modified Works" which says that: "...This means especially that the (root-level) name of a Modelica package under this license must be changed if the package is modified...". This clause makes perfect sense for a main library like the Modelica library that is developed and maintained centrally by a major group and wants to protect its product name. But what does this mean for open-source projects that are no longer hosted centrally but rather decentralized on platforms like GitHub and GitLab, where contributions are no longer made by committing directly into one central repository? In the decentralized case, contributions are made by first "forking" (i.e., generating a copy of the original repository), modifying that fork and then sending the contribution back via a "pull request" (i.e., offering the originating project the chance to accept the changes made on the fork). The problem is that the very first step of "forking" the library generates a copy with the identical "(root-level) name" at a different location. One could argue that this alone is already a violation of the terms of the "Modelica License Version 2.0". So what should the library developer do? The simplest solution is to not use the "Modelica License Version 2.0" for libraries but rather go for standard licenses (Open Source Initiative 2015) that are more compatible with open source, community driven development (e.g., MIT or BSD licenses). Interestingly, the old "Modelica License Version 1.1" is still suitable for user libraries since it does not contain the restriction of having to change the package name. So what about "copyleft style" licenses? The most famous copyleft license is the GNU General Public License.
People might think this would be a good choice for a license in order to protect parts of their library from being used inside proprietary libraries without any bug fixes and improvements being fed back to them as "upstream" developers. Unfortunately, the GPL forbids any non-GPL library (even the Modelica Standard Library) from using the GPL-licensed library and being distributed that way. So what about the LGPL? It allows usage and distribution alongside other non-GPL libraries. The problem here is that it does not allow static linking, something that typically happens when one creates a compiled version of a simulation model that uses different Modelica libraries. A typical example would be the generation of an FMU (Modelica Association 2015). A way out of this is the "Mozilla Public License", which is very much like the LGPL but allows generated code to be statically linked together with differently licensed code. In conclusion, libraries should, if possible, avoid the "Modelica License Version 2.0" as it was primarily designed for the requirements of the Modelica Standard Library. Perhaps there will be a future revision that is adapted to current open-source development models. But until then, we suggest the use of standard licenses along the lines of BSD/MIT or MPL.

8 Future Development

8.1 Dependency Constraints

As already mentioned, there is currently no way to express conflicts between different packages. However, it is highly likely that such conflicting pairs will exist as more and more packages are published. For instance, two Modelica models might depend on different, specific versions of an external library that cannot be linked or loaded at the same time, an already published package might contain known bugs, etc. Hence, impact could be extended by the means to express conflicts as well. Boender introduces the notions of strong dependencies and strong conflicts to optimize the handling of very large repositories.
This kind of optimization might not be necessary in the Modelica ecosystem right now, but could provide helpful performance enhancements in future versions of impact.

8.2 Crawling

At the moment, impact is only able to crawl GitHub repositories. There is nothing particularly special about GitHub and/or its APIs. The authors are confident that indices could be constructed for many different storage types. The most obvious next steps for crawling support would be to add support for GitLab and Bitbucket (Mercurial and Subversion) repositories. Pull requests to introduce such functionality are welcome. On a related note, we anticipate there will be many use cases where impact could be useful for closed source projects that involve private repositories. We think this is an important use case and we hope to provide support for crawling such repositories. This would, for example, allow model developers at companies that have made a significant investment in building Modelica related models and libraries to use impact to search and install these proprietary libraries via their corporate intranet.

8.3 Project Details

We have already created a number of issues that require users to provide more explicit information about how they want impact to function on a per project basis. For example, when working with forked libraries (where the index contains multiple libraries with the same name), it is useful to use the URIs associated with each library in the index to disambiguate which particular library to use. Furthermore, there may be cases where the user is actually interested in doing development work on the dependencies as well. In such cases, those dependencies shouldn't simply be installed, they should be checked out from their repository to make modifying and re-committing easier. For these and other project related features, we feel there is a need to introduce another file to provide such additional information that is project specific.
8.4 Web Based Search

Other package managers often provide a web site where users can search for a specific package through the web, read documentation, log issues and/or even download the packages. Because impact is organized into libraries (and not just a command line tool), we feel this kind of functionality could be added in the future.

8.5 Installers

Finally, when installing software, it is common for developers to distribute "installers" (i.e., executables that, when run, unpack and install the software). Another potential extension of impact could be to generate such installers. In this case, we could once again leverage Go's static executable generation to build such installers from the index. Instead of installing the needed files locally, the installer could simply bundle them up and attach them to an installation program using one of the many Go extensions (Riemer 2015; Tebeka 2015) for concatenating static content onto executables, or simply download some pre-specified libraries over the network.

9 Conclusion

In conclusion, impact leverages information already available in Modelica source code along with some common conventions in order to help users find and install Modelica libraries. It does this by crawling repositories and indexing their contents. An index of publicly available libraries created by impact is hosted on modelica.org for use by the impact command line tool. The impact command line tool, if present, is already used by OpenModelica to help find and install dependencies. By making the impact executables available across platforms and providing a version of the source code that can also be embedded as a library, we hope the Modelica community will benefit from having first class package management capabilities, just like other software eco-systems.
10 Acknowledgements

The authors would like to thank Christoph Höger of Technische Universität Berlin, Martin Sjölund of Linköping University, Francesco Casella of Politecnico di Milano and Peter Harman of ESi Group for their contributions to this project.

References

GitHub (2014). Build software better, together. URL: https://github.com/.
1.1 Groupware: Systems that Support Computer-Mediated Interaction A famous definition of the term groupware defines groupware systems as "intentional group processes plus software to support them" (Johnson-Lenz and Johnson-Lenz, 1981). This definition includes various aspects that we have to consider when designing groupware solutions: - The core of the definition is the group. A group of users wants to interact using groupware. Naturally, the members of the group should play an important role in the design of groupware. The groupware design has the purpose of creating a solution that satisfies the user's needs. End-user requirements therefore have to be the central issue during groupware design. - The group interacts in an intentional group process. The interaction between people needs to play an important role in the design of any groupware solution. It has to become clear who interacts with whom. How strict the intentional group process is must be considered, ranging from unplanned interaction in virtual communities up to formally structured workflows in a distributed workgroup. - The process is supported by software. The fact that software is mentioned third here emphasizes that the software itself should be a supportive facility to ease the interaction between people. The software should be adapted to the users' needs to best fulfill its supportive role. At this point the software developer comes into play. As software supports the group process, the software developer should support the users in adapting or creating software that fits the process. Compared to a focus on design, which has the goal of supporting the group in the manipulation of content, support for social group interaction needs a broader focus, that of the relationships between users. Tools for manipulation are in most cases used by one user (even in collaborative systems), so they affect the relationship between the user and shared artifacts. 
Social interaction, on the other hand, affects the relationships between users and needs to address issues like trust and privacy. In contrast to the design of tools for the manipulation of artifacts, which mainly affects human-computer interaction, the focus should thus be on human-computer-human interaction (HCHI). The design of tools therefore focuses on the interaction of the user with the artifact, and considers the human-human interaction as a marginal aspect.

To provide customized designs of groupware mechanisms, we have to make use of a design process that is flexible enough to adapt to the group's needs. Experiences with the design of single-user applications have already shown that many software development projects fail because of requirements inadequacies (Dorfman, 1997). In such cases, the customer is typically involved in the early stages of the project as a source of design requirements. This set of requirements is then implemented by the software developers and subsequently the customer assesses the result. However, if the requirements were not specified correctly, customers receive a product that does not match their needs. This means that requirements in the context of computer-mediated interaction must always address social aspects as well as technical aspects, which is why they are called socio-technical requirements. Unfortunately, these socio-technical requirements are often less clear to the stakeholders involved in the development of groupware applications. Two factors make this part of groupware development difficult:

- While in single-user tasks, such as word processing or image editing, only one actor interacts with an artifact, groupware needs to support the interaction of many users with each other. An interaction partner is thus not a technical, deterministic artifact, but a non-deterministic human.
- Users are not as familiar with using these new opportunities for interaction compared with single-user applications.
The theory of socio-technical design views a community from two perspectives: the social system, including group processes, roles, and flow of information, and the technical system, which includes tools used within the community, such as IT infrastructure or buildings. From a socio-technical perspective these two systems are highly interrelated. A socio-technical perspective on groupware design has to be aware of three key aspects (Bikson and Eveland, 1996):

- It is difficult to predict the reciprocal effect of changes to either the social or the technical system.
- The process used to create the socio-technical system will affect the acceptance of the system.
- Both social and technical systems change over time.

The tools in the technical system, i.e. the software that supports intentional group processes (Johnson-Lenz and Johnson-Lenz, 1981), can be classified in many different ways. One popular way to classify groupware is to distinguish how it supports groups. Teufel et al. (1995) introduced such a model and distinguish between three main support functionalities:

1. Communication focuses on the information exchange between cooperating group members.
2. Coordination concentrates on coordinating group tasks.
3. Cooperation adds the ability to accomplish group goals to the above support functionalities.

As all main support functionalities start with the letter C, Borghoff and Schlichter (2000) later called this approach the 3C classification. In their initial proposal, Teufel et al. (1995) positioned the three main functionalities in a triangle to cluster groupware applications into system classes of common functionality, i.e., communication systems, workflow management systems, shared information spaces, and workgroup computing systems. In contrast to the initial approach of Teufel et al. (1995), we propose a different approach: Figure 1.1 places well-known groupware applications in a two-dimensional space.
The vertical axis denotes the application’s support for coordination, while the horizontal axis is used to denote the degree of communication and cooperation that an application supports. This is possible because a higher degree of communication implies a lower degree of cooperation. By placing an application in this two-dimensional space, the individual degree of communication, coordination, and cooperation can be visualized much better for each of the application types. In particular, we distinguish the following groupware applications in Figure 1.1:

- **Audio/Video conferencing** tools allow users to communicate by various means, so they have a high degree of communication and a low degree of cooperation. Compared to workflow management systems, for example, they do not explicitly offer functionality for scheduling or organizing tasks. We thus see them at a medium degree of coordination.
- **Chat** tools have a lower degree of communication than audio/video conferencing tools, as non-verbal information is omitted when communicating via a chat application. For the same reason, the degree of coordination is reduced.
- **Group Decision Support Systems (GDSS)** are explicitly designed to support groups in decision-making. For that purpose, they offer synchronous as well as asynchronous communication tools, group votes, etc. They therefore have a high degree of communication and coordination.
- **E-mail** is the most popular groupware application. E-mail can be used for many purposes, but its main purpose is to support communication. As the communication is asynchronous and text-based, the degree of communication is reduced compared to chat tools. However, as users can structure their information when using e-mails, the degree of coordination is increased.
- **Forums** allow users to discuss a topic in which they are interested. The communicating group is therefore defined by the topic. Compared to e-mail, communication is more public.
However, if used in a company, forums allow coordination of a group that is cooperating on a common task.

- **Community Systems** integrate a variety of tools and allow a large group of users, i.e., a community, to communicate, to share information, or to coordinate common activities. Often, these tools are web-based. Compared to the tools listed above, community systems have better support for accomplishing and coordinating group goals. However, the degree of communication possible is lower, as there is no possibility of communicating directly with individual community members.
- **Wikis** are web-based systems that allow users to change the content of the web pages. Wikis have their origin in the design patterns community. The first Wiki was the Portland Pattern Repository, which was created in 1995 by Ward Cunningham. As Wikis allow users to create and share content, they have a high degree of cooperation, but as they do not explicitly support communication or coordination, they are low in these respects.
- **Shared workspaces** such as BSCW (see Section 1.1) allow users to share content. In most cases, they also allow structuring of the shared content to coordinate common tasks. For that reason, shared workspaces have a higher degree of cooperation and coordination than Wikis.
- **Multi-player games** are becoming more and more popular. They allow users to solve tasks or quests jointly, and support a number of coordination functionalities for that purpose. Communication is mainly short and used only for coordination, which explains the degree of communication, coordination, and cooperation they exhibit.
- **Workflow Management Systems (WFMS)** are tools that allow modeling, coordination, supervision, and evaluation of a workflow by a cooperating team. For that reason they exhibit the highest degree of coordination of all tools. As their main purpose is to coordinate users in accomplishing a group goal, they also have a high degree of cooperation.
WFMS only use communication for coordination purposes, for example to pass on a task or to notify about a completed task, so they show a quite low degree of communication.

- **Multiuser editors** such as CoWord (see Section 1.2) allow cooperating users to create a shared artifact synchronously, for example a text document, drawing, or a spreadsheet, and thus accomplish group goals. This explains the high degree of cooperation of such tools. Multiuser editors use a lot of coordination functionalities as well, for example to avoid conflicting changes. Communication is not explicitly supported, thus the degree of communication is low.

Apart from the various main functionalities that are supported by a groupware application, awareness plays an important role. Of the tools listed above, multiuser editors, for example, make use of awareness widgets that show the working area of other users, with the goal of avoiding conflicting changes in a shared artifact. Awareness can be seen as a mediator between these three main functionalities.

**Figure 1.2** Relationship between communication, coordination, cooperation, and mediating group awareness

Gerosa et al. (2004) describe this as shown in Figure 1.2. In this figure, cooperating users must be able to communicate and to coordinate themselves. When communicating, users might generate commitments and define tasks that must be completed to accomplish the common group goal. These tasks must be coordinated so that they are accomplished in the correct order and at the correct time with respect to possible external restrictions. To accomplish these tasks the users have to cooperate in a shared environment. However, while cooperating, unexpected situations might emerge that demand new communication. In such communication new commitments and tasks might be defined, which again must be coordinated to be accomplished in cooperation. Apart from this cyclic aspect of cooperation, Gerosa et al.
place awareness in a central position in Figure 1.2. Every user action that is performed during communication, coordination, or cooperation generates information. Some of this information involves two or even more users, and should be made available to all cooperating users so that they can become aware of each other. This helps to mediate further communication, coordination, and cooperation. Based on this information, users are able to build up a shared understanding of their common group goals and to synchronize their cooperation.

Now that we have clarified our understanding of groupware, the following section presents a scenario in the not too distant future. This will serve as a running scenario throughout the book. It will relate the patterns in the book to a practical example and show how they can be applied in the scenario.

## 1.2 A Day with Paul Smith

Join us on a ride with a time machine. Our destination is a typical working day in the life of Paul Smith. Paul is a software engineer and works in the software development department of a leading entertainment device company in London. Currently, Paul is the project leader in the COGE project in which a Cooperative Game Engine is being developed. Paul's company has subsidiaries all over the world. The members of Paul's team are distributed as shown in Figure 1.3. One team of developers is located in Rio de Janeiro, one in London, and a third in Hong Kong. The main customer is a large game manufacturer located in Germany, which has the goal of building an educational game that helps better understanding of water supply in African countries. The game manufacturer has a group of African pilot users located in Ethiopia and Malawi. Most of the projects in Paul's department are performed in teams to benefit from the synergy of people with varied expertise.
Currently, the following interactions are present in the COGE project: the developers from London continue to work on the results that were created only hours before in Hong Kong. Both software teams communicate to plan the internal architecture of the game engine. In other meetings, the London team collaborates with the German customer, which integrates the game engine in its project. The German customer also communicates with some of the developers in Hong Kong or Rio if time allows interaction. Finally, the German customer interacts with their pilot users and collects suggestions from them on how the game could be improved. For their common tasks, team members interact daily using their computing devices. Let's now take a look at how Paul's typical working day starts.

6:30 AM. The alarm clock rings and Paul gets out of bed. After a shower and a shave, Paul prepares his breakfast. While eating his cereal and enjoying his freshly brewed coffee, Paul has a look at his electronic newspaper (see Figure 1.4). The electronic newspaper shows Paul the latest news in specific categories in which he is interested. Paul is an enthusiastic member of the pattern community and participates in an online community that writes, discusses, and shares patterns. He has therefore configured a special section in his electronic newspaper that shows him the latest pattern community news and information about his buddies. The daily report tells Paul what has happened in his online community and allows him to keep track of interesting discussions. A sidebar in the newspaper shows Paul's buddy list. As some of Paul's buddies are already awake, Paul has a short chat with them and agrees to arrange a meeting in the evening. To plan his working day, Paul checks his main tasks for the day and the achievements of his colleagues during the night. In Hong Kong they have solved one of the major problems with the network protocol for the new cooperative game engine.
However, the solution has raised some new problems in a module that is developed by the team in Rio. Paul therefore decides to announce a meeting with the colleagues in Rio for the afternoon. He enters the collaboration space and sends invitations to those involved.

8:30 AM. Paul leaves his house in a small neighborhood in the London suburbs, gets into his car, and sets off for his office in the city. In the car Paul recalls the destination from his favorite destinations folder. The navigation system of the car not only connects to GPS satellites, but also to the Internet to plan the best route into the city. It uses GPS to detect Paul’s position and the Internet to avoid traffic jams. Additionally, Paul sends his route to his office in an online travel portal that mediates travel mates. Travel mates are selected not only according to the destination but also according to Paul’s topics of interest. The latter is quite important for Paul, as he does not want to share a ride with someone with whom he has nothing to talk about. In most cases, this allows Paul to pick up a travel mate on his way into the city.

**Figure 1.5** Paul looks for a new travel mate

This morning the travel portal suggests a new travel mate (see Figure 1.5). Paul does not know this person, but the portal uses a recommendation system, and the travel mate is ranked as a trustworthy and interesting person. Paul has an additional look at the user gallery and reads the introduction of the proposed travel mate. Paul is satisfied with the suggestion and decides to stop on his way into the city and to pick up the suggested travel mate. The car navigation system calculates the estimated pick-up time and notifies the travel mate. It also keeps the travel mate aware of probable changes so that she does not have to wait too long.

9:30 AM. After picking up the travel mate and dropping her at her destination, Paul arrives in his office.
A biometric security check at the entrance proves Paul’s identity, and Paul moves to the project’s group room where he meets his colleagues. Video screens show the offices of colleagues in Frankfurt, Hong Kong, and Rio de Janeiro in a permanent video stream. One of the colleagues in London starts a discussion about the project’s current problems. Paul suggests postponing the discussion until the afternoon when colleagues from Brazil will also be available. Currently, nobody is in the office in Rio, as it is not yet morning there. As plenty of time is left before the general meeting, Paul joins a group that discusses the software architecture of the current project (see Figure 1.6). This group meets in a special room that allows 3D projections. Currently, the group is discussing parts of the architecture for the user interface. Luckily, this group is not affected by the problem that was raised by the solution from China, and makes good progress.

**Figure 1.6** Paul participates in a virtual reality conference about the software architecture of the current project

When the meeting about the software architecture is over, Paul goes to his office to start up his desktop computer. He enters the group’s collaboration space and is pleased to see that everyone has accepted his invitation to discuss the new problems with the network protocol. The collaboration space then notifies Paul about newly received mails, who else is on line in the collaboration space, and open tasks. As the group has decided to use an open awareness concept, Paul can also see what everyone is currently doing by moving his mouse cursor over the images in his buddy list. This information is often used to start a spontaneous collaboration and discussion about ongoing problems. However, teammates who do not want to be disturbed indicate this in the buddy list so that the collaboration space does not allow direct communication.

1:00 PM.
After a few more hours of work and a good lunch in the company’s canteen, it is time for the group meeting to discuss the new problems with the network protocol. The video screens show that the necessary people are available at all locations. Paul contacts them and announces the start of the meeting, and all his colleagues move to the group meeting room. This room is equipped with the 3D projector Paul used in the morning. This projector displays video streams for each participant from the various locations (see Figure 1.7), the virtual room for the meeting in the team’s collaboration space, and the current shared documents containing the description of the network protocol. This allows everyone to see each other and the material for discussion. Paul opens the meeting by passing the floor to his colleague Gwan in Hong Kong. Gwan explains how they have solved one of the major problems with the network protocol. To do this, Gwan uses a virtual pointer that allows him to point to the corresponding lines in the source code. Other colleagues can discuss Gwan’s presentation using synchronous chat so that Gwan is not disturbed. They can also annotate the source code and post questions to a blackboard; these will be discussed after Gwan’s explanation. Everyone is impressed with Gwan’s presentation, although they know that his solution raises a new problem. After the open questions have been answered, Paul hands the moderation over to Rio de Janeiro and Ana explains the new problem. Ana’s presentation raises a lot of open questions on the shared blackboard. The group clusters the open questions and splits into subgroups to address these question clusters. The subgroups create new virtual rooms in the team’s collaboration space to discuss the open questions. Before the groups retreat to their new virtual rooms, Paul schedules a new meeting for tomorrow for the groups to present their results.

4:00 PM. Paul has his last meeting for the day.
David, a colleague from Detroit, visits the lab. After giving David a short guided tour of the office, Paul tells him about the new problems with the network protocol. David starts smiling, as he knows how to solve part of the problem. Paul and David therefore enter the collaboration space and knock at the virtual door of the subgroup that formed this afternoon and whose questions David can answer. David offers himself as a mentor and to explain the technology that can solve part of the problem. Soon, David and the other colleagues are in deep discussion and Paul leaves to do other work in his office. Two hours later, David leaves the lab to catch his flight back to Detroit. The subgroup tells Paul that they have nominated David as an expert for specific topics in the collaboration space. This might help David with his next evaluation and wage bargaining.

8:00 PM. Paul has finally finished his most important tasks for the day. He uses his MDA¹ to connect to his online community. As soon as he is online, his friends contact him. They had thought that Paul had forgotten about their appointment. Paul had, and excuses himself for being late. Paul’s friends suggest watching a movie in one of the new cinemas downtown. A quick vote shows that all agree. They run a recommender system for movies, and after a short discussion agree on what to see (see Figure 1.8). Adriana offers to buy the tickets and reserve the seats in the cinema’s online booking system. So a long working day finally ends, and Paul leaves his office to watch a movie with his friends. We can step back into our time machine and go on a short ride back to the present.

¹MDA is an abbreviation for mobile digital assistant, which is a combination of a mobile phone and a personal digital assistant (PDA).

## 1.3 Outline

The scenario of Paul Smith shows one vision of the future. Our main prediction is that in future people will interact more and more using computing devices.
In combination with software these computing devices will mediate interaction among people. As the overview of groupware approaches shows, the scenario is not too far in the future, as most of the computer-mediated interaction it describes already happens in our lives, although not as an integral part of daily life. To mention a few, Paul's day starts with a look at the Periodic Report of his favorite online community, then at his Buddy List to see who else is already on line. The team is using a collaboration space that is based on virtual Rooms, Paul's colleague David acts as a Mentor, and finally Paul and his friends use a recommender system with Letters of Recommendation to select a movie for the evening. The terms set in SMALL CAPS are patterns that are part of our pattern language for computer-mediated interaction. These and other patterns can be found in different chapters of this book, which is structured as follows:

Chapter 2, From Patterns to a Pattern-oriented Development Process, introduces the reader to the theory of patterns. It looks at the original and more recent publications by Christopher Alexander. Using an end-user centered view, we transfer ideas to the domain of computer-mediated interaction. This results in a pattern form that is different from the pattern forms used in more technical pattern languages. While technical pattern languages use design diagrams or code fragments to illustrate solutions, we prefer a narrative way of presenting the patterns. This ensures that both end users and developers will be able to read the solution. In the remaining part of this chapter, we will introduce OSDP, a pattern-oriented process for groupware development, which is based on piecemeal growth via short design and development iterations, as well as frequent diagnosis and reflection on working practices and how the application supports them.

Chapter 3, Community Support, describes patterns at a high level of abstraction.
The patterns in this chapter describe group processes and the use of computer technology to support such processes. Its main focus lies on the early phases of the group process. It answers questions such as:

- How to arrive in the community
- How to find out what is interesting in the community
- How to protect users

Chapter 4, Group Support, provides patterns at the user interface level of a collaborative application. The patterns are both technical (describing how to design group interfaces) and social (elaborating on successful application of groupware technology). Problems solved are:

- How to modify shared material together
- How to shape places for collaboration
- How to organize textual communication
- How to become aware of other users' actions
- How to notice absent participants

Chapter 5, Base Technology, discusses the technical layer of groupware applications. The patterns are mainly technical and answer the questions:

- How systems bootstrap for collaboration
- How systems manage shared data
- How systems ensure data consistency

Chapter 6, Examples of Applying the Pattern Language, presents two case studies, one on BSCW and another on CoWord. These case studies show how group interaction can be supported by HCHI technology. The goal of this chapter is to put the patterns together and to illustrate how they are used by two well-known groupware applications.

# Chapter 2 Related Work

This chapter discusses literature that is relevant to the proposed research. It begins by providing a high-level discussion of computer-supported cooperative work (CSCW) and groupware, and then current approaches used to design groupware systems are discussed. This chapter is divided into the following sections: Computer-supported cooperative work, Groupware design, and Mobile groupware design recommendations.
## 2.1 Computer-supported cooperative work and groupware

Computer-supported cooperative work is a research area concerned with how computer systems should be designed to support group work and with the effect those systems have on group work patterns (Dix et al., 1998). The applications that are designed to support group work are often referred to as groupware, which has been defined as “computer-based systems that support groups of people engaged in a common task (or goal) and that provide an interface to a shared environment” (Ellis et al., 1991). The notions of a common task and a shared environment are crucial to this definition. This excludes multiuser systems, such as time-sharing systems, whose users may not share a common task. Note also that the definition does not specify that the users be active simultaneously. A wide variety of groupware systems have been developed in recent years, and some have received widespread acceptance while others have had limited success (Grudin, 1994). Some examples include:

- Electronic mail (Sproull, 1993)
- Group calendars (Lange, 1993)
- Telemedicine applications (Horsch and Balbach, 1999)
- Co-authoring tools (Neuwirth et al., 1993)
- Group drawing tools (Greenberg et al., 1993)
- Audio- and video-conferencing tools (Bly et al., 1993)
- Workflow systems (Ellis, 1999)
- Instant messaging (Isaacs et al., 2002)
- Newsgroups and network communities (Shneiderman, 1998)
- Tabletop display groupware (Scott et al., 2003)
- Shared window systems (Lauwers et al., 1993; Lauwers and Lantz, 1993)
- Electronic meeting systems (Mantei, 1993; Nunamaker et al., 1993)
- Collaborative virtual environments (Hindmarsh et al., 1998)

Groupware systems are often classified according to the type of collaboration that they support.
In this classification scheme, collaboration has a temporal and a spatial dimension, and these dimensions are commonly shown using the time-space matrix in Table 2.1 (Baecker, 1993; Dix, 1998; Ellis, 1999; Preece et al., 1994; Shneiderman, 1998). According to the matrix, modes of interaction differ along a time dimension and can be either synchronous (occurring at the same time) or asynchronous (occurring at different times). They also differ along a place dimension, and can be co-located (collaborators are in the same location) or distributed (collaborators are in different locations).

Table 2.1: Time-space matrix

<table>
<thead>
<tr>
<th>Place</th>
<th>Same time</th>
<th>Different times</th>
</tr>
</thead>
<tbody>
<tr>
<td>Same place</td>
<td>Face-to-face (tabletop displays, meeting support tools)</td>
<td>Asynchronous co-located (argumentation tools, virtual notes)</td>
</tr>
<tr>
<td>Different places</td>
<td>Synchronous distributed (shared editors, video- and audio-conferencing tools)</td>
<td>Asynchronous distributed (e-mail, bulletin boards, shared information spaces)</td>
</tr>
</tbody>
</table>

In the next sections, I briefly discuss each of the four types of groupware shown on the time-space matrix. The discussion is organized according to the following themes:

- Synchronous distributed groupware
- Synchronous co-located groupware
- Asynchronous distributed groupware
- Asynchronous co-located groupware

### 2.1.1 Synchronous distributed groupware

Synchronous distributed groupware allows users to work together at the same time even though they are in different locations (Baecker, 1993). Most of these applications provide shared workspaces where group members can create and edit shared artifacts such as images, documents, or agendas (Gutwin and Greenberg, 1999). These applications usually include real-time communication support using voice, video, or text messaging (Dix et al., 1998), and awareness features are often incorporated into the workspace to help each group member to understand others’ activities (Dourish and Bellotti, 1992; Gutwin and Greenberg, 1996).
A number of synchronous groupware tools have been developed to allow collaboration between physically distributed workers. Groupware toolkits such as TOP (Guerrero and Fuller, 2001), Rendezvous (Patterson et al., 1990), GroupKit (Roseman and Greenberg, 1996), and COAST (Schuckmann et al., 1999) are all intended to help developers build real-time groupware applications. Additionally, many groupware applications provide features that allow collaboration across a distance, such as videoconferencing tools (Okada et al., 1994), audioconferencing tools (Rodenstein and Donath, 2000), shared whiteboards (Streitz et al., 1994), and shared editors (Olson et al., 1993).

### 2.1.2 Synchronous co-located groupware

Synchronous co-located groupware systems support face-to-face interactions between two or more collaborators. These systems help groups generate ideas and understanding, and common areas of support are research environments, design tasks, management meetings, and brainstorming sessions (Dix et al., 1998). These systems can provide users with a single shared interactive display (Kruger et al., 2003) or with separate individual networked clients (Bruce et al., 1992).

A range of synchronous co-located groupware systems have been developed. For example, Foster and Stefik (1986) developed Cognoter to support idea generation in team meetings; each team member has a separate networked client that allows them to enter new information into a shared information space. Pedersen et al. (1993) developed Tivoli, a single-display groupware application that uses a whiteboard metaphor. Users interact with the system’s large display using a stylus, and the system allows the group to save and organize their work in several different workspaces.
### 2.1.3 Asynchronous distributed groupware

Asynchronous distributed groupware allows distributed groups to collaborate whenever it suits each member’s schedule (Pankoke-Babatz and Syri, 1997; Manohar and Prakash, 1995). This approach frees them of the need to schedule common times to use the application, as is seen in real-time groupware applications. Information persists in the system so that it is available to users, regardless of the access time. Most asynchronous distributed groupware systems use a client/server architecture, and information about the group’s activities is stored on the server so that client applications can retrieve updates whenever it suits the user’s schedule (Pankoke-Babatz and Syri, 1997). As users interact with the client application, information is passed on to the central server so that it is available to others. This strategy is used in a number of systems including TeamRooms (Roseman and Greenberg, 1996) and GroupDesk (Fuchs et al., 1995). On a more limited scope, USENET and bulletin board systems provide a central shared space for group communication.

### 2.1.4 Asynchronous co-located groupware

Asynchronous co-located groupware systems support collaboration between people at a single site, but at different times. These systems provide a central location for collaboration support, and users interact with the systems when it suits their schedule. Asynchronous co-located groupware systems are varied in their architectures and uses. For example, GeoNotes (Espinoza et al., 2001) allows users to place virtual notes that are attached to real-world locations. The notes can be accessed by others when they visit that location using mobile phones and PDAs, and workers are alerted when they come into close physical proximity with a note. Dix et al. (1998) discuss argumentation tools that are used by design teams to record design decisions and the arguments that led to those decisions.
These systems are typically used at a single site, and workers commonly use the system asynchronously.

2.2 Mobile groupware

With recent shifts toward increased mobility in the Western workforce (Dahlbom and Ljungberg, 1998), mobile collaboration has become an increasingly important issue in CSCW. However, efforts to understand the implications that mobile work and mobile collaboration have for the design of technology remain an active research subject (Alarcon et al., 2006; Aldunate et al., 2006; González et al., 2005; Guerrero et al., 2006; Perry et al., 2001). Mobile groups are highly varied in the ways they organize work, in the physical dispersion of mobile workers, and in the styles of collaboration that take place between workers (Andriessen and Vartiainen, 2006; Luff and Heath, 1998; Wiberg and Ljungberg, 1999). To help make sense of this diversity, efforts exist to describe and classify these variations by focusing on specific types of mobility (Kristoffersen and Ljungberg, 2000), types of physical distributions that occur in mobile groups (Luff and Heath, 1998), and levels of coupling between mobile collaborators (Pinelle and Gutwin, 2005; Churchill and Wakeford, 2001). Luff and Heath consider the question of physical dispersion of workers in mobile settings, and they identified three types of mobile distributions: micro-mobility, local mobility, and remote mobility (Luff and Heath, 1998). Micro-mobility is described as the way an artifact can be moved and manipulated in a relatively circumscribed, “at hand” domain, but it is also suggested that it includes “ways of providing and receiving information whilst co-present with others”. Local mobility describes mobility around a single worksite. For example, an individual might move between different rooms or floors in a building. Remote mobility describes individuals who move around different locations or worksites.
Remotely mobile groups differ from the other types of groups on the CSCW time-space matrix since the time and place dimensions vary depending on each worker’s location and schedule. Collaboration in these groups has many of the same problems that are encountered in stationary distributed groups (Gutwin and Greenberg, 1999; Mark, 2002). However, since places and schedules vary, it is also difficult for workers to stay aware of others’ locations and availability (Fagrell et al., 2000; Pinelle and Gutwin, 2005), and it can be difficult for workers to establish any type of intentional synchrony, even when technologies are utilized (Brown and Chalmers, 2003; Brown and O’Hara, 2003). In spite of ongoing advances in mobile computing platforms and networks, technical hurdles make it difficult to develop groupware for remotely mobile groups. In groups, members often need to coordinate their activities, stay aware of others’ activities, and explicitly communicate with each other (Pinelle and Gutwin, 2005); however, the wide area wireless networks that are needed to support remote mobility are less reliable than wired networks (Satyanarayanan, 1995; Edwards and Mynatt, 1997; Edwards et al., 1997), and group interaction is often challenging to support when synchrony and timeliness of information are at issue. For mobile workers who work across a wide area, both interference and signal strength change frequently due to changes in location as well as natural variability. Some of the direct effects are periodic disconnections, loss of data, and long delays due to congestion, retransmission, or low bandwidth. Several techniques have been offered that lessen some of these consequences under particular circumstances. Data replication (Mascolo et al., 2002a; Ratner et al., 2001) and caching increase the availability of information during periods of disconnection and reduce delays.
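In practice, these techniques are often combined in a store-and-forward client that queues updates while disconnected and reconciles them with a central server on reconnection. The sketch below is a minimal illustration under simplifying assumptions: all class and field names are hypothetical, a logical clock stands in for real timestamps, and conflict resolution is reduced to last-writer-wins, one of several strategies discussed in the literature.

```python
from dataclasses import dataclass


@dataclass
class Update:
    key: str
    value: str
    timestamp: int  # logical clock; a higher value wins on conflict


class Server:
    """Centrally accessible store that clients synch up with."""

    def __init__(self):
        self.state: dict[str, Update] = {}

    def merge(self, updates: list[Update]) -> list[Update]:
        """Apply a client's offline updates (last-writer-wins) and
        return the full state so the client learns of others' work."""
        for u in updates:
            current = self.state.get(u.key)
            if current is None or u.timestamp > current.timestamp:
                self.state[u.key] = u
        return list(self.state.values())


class Client:
    """Works offline, queuing updates until a connection is available."""

    def __init__(self):
        self.pending: list[Update] = []
        self.view: dict[str, Update] = {}

    def edit(self, key: str, value: str, timestamp: int) -> None:
        self.pending.append(Update(key, value, timestamp))

    def synch_up(self, server: Server) -> None:
        for u in server.merge(self.pending):
            self.view[u.key] = u
        self.pending.clear()


# Two clients work offline, then synch up in turn.
server = Server()
a, b = Client(), Client()
a.edit("report", "draft-by-A", timestamp=1)
b.edit("report", "draft-by-B", timestamp=2)  # later write
a.synch_up(server)
b.synch_up(server)
a.synch_up(server)  # A learns of B's later edit
print(a.view["report"].value)  # draft-by-B
```

Last-writer-wins is the simplest possible resolver; the optimistic replication schemes discussed below replace it with conflict detection and user-visible resolution.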
Consistency problems can be mitigated using optimistic replication schemes (Satyanarayanan, 2002), automatically resolving conflicts when they happen, and representing conflicts to the user. Adaptive strategies allow systems to make better use of their available resources, which can also lessen delay problems and help to make smooth transitions between connected and disconnected states. Although these techniques have made many mobile collaboration problems more manageable, it is still difficult to mitigate, predict, and cope with wide area mobility problems at the user, application, and infrastructure levels (Jing et al., 1999). At the application level, mobility issues have been addressed using asynchronous groupware that allows workers to carry out their work offline, since network access may only be available intermittently (Litiu and Zeitoun, 2004; Fagrell et al., 2000; Kistler and Satyanarayanan, 1992). In this approach, work is carried out on a client application that can be disconnected from a centrally accessible server, and the work is stored until a network connection is available. When network access becomes available, the client and server “synch up.” Local work is forwarded to the server so that it is available to others, and the server sends the user information about others’ activities. When stored data conflicts with changes that others have made, conflict resolution techniques may be utilized. Several systems use this approach, including Coda (Satyanarayanan et al., 1990; Kistler, 1996), Bayou (Terry et al., 1995), and FieldWise (Fagrell et al., 2000).

2.3 Groupware design

Groupware designers must deal with the challenges of developing systems that support complex human-human interactions, and that fit target groups’ tasks and their social and organizational work contexts. The need to account for human-human interaction in groupware designs means that traditional design approaches are often inadequate for developing software to support groups.
To address this need, groupware designers have adopted four different approaches to design: 1) incorporate social science approaches into the design process, 2) use single-user design approaches that consider users and their work contexts, 3) use groupware-specific analysis and evaluation approaches, and 4) use design recommendations and frameworks based on others’ experiences.

**Social science approaches.** Social science theories and approaches have been used to conduct and analyze field observations, and to guide groupware design. Approaches that have been discussed in the CSCW literature include: ethnography (Shapiro, 1994; Blythin et al., 1997; Hughes et al., 1994), activity theory (Collins et al., 2002; Miettinen and Hasu, 2002; Fjeld et al., 2002), and grounded theory (Grinter et al., 1999; Grinter, 1998; Fitzpatrick et al., 1996).

**Single user approaches.** Several techniques that are used for single-user development have been used to design groupware systems (Halverson, 2002). These approaches are based on field observations and on developing an understanding of users’ tasks and work settings. These include: Contextual Design (Beyer and Holtzblatt, 1998), participatory design (Greenbaum and Kyng, 1991; Muller, 1991), and user-centered design (Norman and Draper, 1986).

**Groupware analysis and evaluation approaches.** Several approaches have been developed for analyzing group tasks and/or evaluating the usability of groupware applications. These include: the mechanics of collaboration (Gutwin and Greenberg, 2000), groupware walkthrough (Pinelle and Gutwin, 2002), groupware task analysis (van der Veer and van Welie, 2002), collaboration usability analysis (Pinelle et al., 2003), and heuristic evaluation for groupware (Baker et al., 2002).
**Design recommendations and frameworks.** Several design recommendations and frameworks have been created to provide guidance on designing for groups that operate in specific domains or that have specific characteristics (Brown and Chalmers, 2003; Guerrero and Fuller, 2001; Luff and Heath, 1998; Lukosch and Schümmer, 2005; Neyem et al., 2006b; Pinelle and Gutwin, 2005; Scott et al., 1993). These recommendations are commonly based on the observations, experiences, and insights of developers and researchers working in the field. Lukosch and Schümmer discuss how groupware frameworks provide solutions for the development of groupware applications, but have properties that complicate their usage and do not sufficiently support groupware developers (Lukosch and Schümmer, 2006). They argue that a pattern approach to the technical aspects (or design recommendations) of groupware development serves as an educational and communicative vehicle for reaching a shared understanding of how to design groupware applications, and fosters the reuse of proven solutions. Furthermore, they argue that patterns describe solutions to recurring issues in groupware development and foster communication between developers and end-users, since these groups need a common language and a common understanding of the problem space.
Automated analysis of security requirements through risk-based argumentation

Yijun Yu, Virginia N.L. Franqueira, Thein Than Tun, Roel J. Wieringa, Bashar Nuseibeh

The Open University, Milton Keynes, UK; University of Derby, Derby, UK; University of Twente, Enschede, The Netherlands; Lero, the Irish Software Engineering Research Centre, University of Limerick, Ireland

ABSTRACT

Computer-based systems are increasingly being exposed to evolving security threats, which often reveal new vulnerabilities. A formal analysis of the evolving threats is difficult due to a number of practical considerations, such as incomplete knowledge about the design, limited information about attacks, and constraints on organisational resources. In our earlier work on RISA (Risk assessment in Security Argumentation), we showed that informal risk assessment can complement the formal analysis of security requirements. In this paper, we integrate the formal and informal assessment of security by proposing a unified meta-model and an automated tool for supporting security argumentation called OpenRISA. Using a uniform representation of risks and arguments, our automated checking of formal arguments can identify relevant risks as rebuttals to those arguments, and identify mitigations from publicly available security catalogues when possible. As a result, security engineers are able to make informed and traceable decisions about the security of their computer-based systems. The application of OpenRISA is illustrated with examples from a PIN Entry Device case study.

1. Introduction

Security risks evolve in software-intensive systems. Attackers exploit an increasing number of vulnerabilities, ranging from cryptographic protocols to human subjects. Introducing new technologies to such systems often imposes security risks with a higher likelihood of harming assets.
In practice, security is not perfect due to the limited resources available to security engineers, uncertainties about attackers' skills and commitment, and incomplete knowledge about evolving threats and vulnerabilities. In recent years, structured argumentation approaches have proved effective for building safety cases (Kelly, 1998) and for reasoning about both formal and informal descriptions of software systems: to demonstrate compliance with laws and regulations (Burgemeestre et al., 2010; Cyra and Górski, 2007), to trace and justify software design decisions (Potts and Bruns, 1988), to establish confidence in software development (Graydon and Knight, 2008), and to build dependability cases to assure compliance in software development (Huhn and Zechner, 2010). Extending the work on security argumentation (Haley et al., 2008), we have developed a framework for reasoning about security requirements of the system where abstract properties are important. For instance, it is possible to formally prove that an access control model will deny access to the Human Resource (HR) database by those who do not work in the HR department. However, real-life phenomena can defy generalisation and abstraction, so the framework needs to support the use of informal arguments. For instance, many HR employees could share a common password, and if one of the employees leaves the department and the common password is not changed, access remains available to someone who is no longer a member of the HR department. Through the use of the Risk assessment in Security Argumentation (RISA) method (Franqueira et al., 2011), we have shown how risk assessments iteratively challenge the satisfaction of security requirements. The main limitation of our previous work lies in its separate models for formal arguments and risk-based arguments, which hinder automated tool support. In this work, this limitation is addressed by means of three contributions to the RISA method.
First, we introduce an integrated modelling language to represent risk assessment and arguments uniformly. Second, the OpenRISA tool support extends the OpenArgue (Yu et al., 2011) argumentation tool to perform automated checking of the formal arguments. Third, we incorporate an automated search functionality to match catalogues of security vulnerabilities such as CAPEC (Common Attack Pattern Enumeration and Classification patterns\(^1\)) and CWE (Common Weakness Enumeration\(^2\)) against keywords derived from the arguments. Compared to the previous ad hoc search, the new tool supports complete coverage of these public catalogues of security expertise. The OpenRISA approach presents a research contribution in representing and reasoning about risks associated with software security requirements. The argumentation part of the work has been evaluated with an industry evaluator at DeepBlue (Yu et al., 2011). The remainder of the paper is organised as follows. Section 2 reviews relevant background on the satisfaction of security requirements and security arguments, whilst Section 3 reviews related work. Section 4 provides an overview of the RISA method, and Section 5 describes the corresponding OpenRISA tool support. Section 6 demonstrates the tool-supported method with a PIN Entry Device (PED) example. Section 7 discusses and points to future research and development work. Finally, Section 8 concludes.

2. Background

The RISA method builds on the notions of satisfaction of security requirements, and outer and inner arguments, introduced by Haley et al. (2008).

2.1. Requirements satisfaction arguments about the problem depicted in a problem diagram

Following the Problem Frames approach in requirements engineering (Jackson, 2001), software system artefacts are separated into S, W, and R, where S represents the specification of a software system, W represents a description of the world in which the software system is to be used (i.e., the context), and R represents a description of the requirements. The software within the system context should satisfy the requirements. The semantics of a requirements problem can be described by the following entailment relationship:

\[ W, S \vdash R \] (1)

The world context W consists of domains (short for problem world domains); elements of the world can be either physical, such as people and hardware, or logical, such as software and data structures. Typically, W also contains assumptions made about these domains. Using the Problem Frames approach (Jackson, 2001), the analysis of a requirements problem follows the principles of divide-and-conquer (i.e. decomposition) and unite-and-rule (i.e. composition). First of all, the knowledge of the physical domains in the context directly referenced and/or constrained by the requirement statements R is analysed, in order to bring indirectly related domains into the analysis until a machine specification S is found. Around S, the collection of domains in the physical world W is depicted in a diagram where nodes represent the domains and edges represent the phenomena shared between the domains. A shared phenomenon could be an event controlled by one domain and observed by another. As such, the so-called problem context diagram, illustrated in Fig. 1, captures high-level causal knowledge about the behaviour between these domains. The OpenRISA tool includes the components of OpenPF which support the creation of context diagrams.

2.2. Toulmin-structured arguments and argumentation processes

According to Haley et al.
(2008), arguments for requirements satisfaction are of two kinds: the first argument constructed, called the outer argument, is a formal proof that relies on certain assumptions about the system context and the specification. The outer argument is formal in the sense that it typically uses a mathematical logic such as propositional logic. The assumptions made in the outer arguments are expanded in and supported by the informal inner argument, which uses structured natural language because those assumptions cannot be described in the formal logic. Effective argumentation establishes a claim that one wishes to convince the world of. In other words, the claim is the object of an argument, a statement that one wishes to convince an audience to accept. In an effective argument, ground truths or facts need to provide the underlying support for an argument, e.g., evidence, facts, and common knowledge. In other words, a ground is a piece of evidence, a fact, a theory, or a phenomenon considered to be true. In addition, warrants connect and establish relevancy between the grounds and the claim. A warrant explains how the grounds relate to the claim, but not the validity of the grounds. Structurally, a warrant is a statement that links a ground and a claim, showing how a claim is justified by ground(s). Rebuttals express conditions under which an argument does not hold by invalidating any of the grounds and associated warrants, thus undercutting the support for the claim. Grounds and warrants may need to be argued, making them claims of nested arguments; therefore grounds and warrants can also be attacked by rebuttals. Here, a rebuttal is a counterargument which undermines the support for the claim. Specifically in the case of security-related argumentation, rebuttals represent risks. Rebuttals can be mitigated in order to restore the confidence that the claim of the rebutted argument still holds.
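To make the formal character of the outer argument concrete, entailment in propositional logic can be checked mechanically by enumerating truth assignments. The sketch below is illustrative only: the atoms and premises are hypothetical, loosely inspired by the HR-database example, and a brute-force truth table stands in for the paper's actual tooling.

```python
from itertools import product


def entails(premises, conclusion, atoms):
    """Brute-force propositional entailment: the premises entail the
    conclusion iff no truth assignment makes every premise true while
    making the conclusion false."""
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True


# Hypothetical domain behaviour premises (properties of W and S):
premises = [
    lambda e: (not e["req_from_hr"]) or e["acl_allows"],  # HR requests allowed
    lambda e: e["req_from_hr"] or (not e["acl_allows"]),  # non-HR requests denied
    lambda e: e["acl_allows"] == e["data_released"],      # released iff allowed
]
# Hypothetical security requirement R: data is released only for HR requests.
requirement = lambda e: (not e["data_released"]) or e["req_from_hr"]

atoms = ["req_from_hr", "acl_allows", "data_released"]
print(entails(premises, requirement, atoms))  # True: the premises entail R
# Dropping the denial premise (an arguable assumption) breaks the entailment:
print(entails(premises[:1] + premises[2:], requirement, atoms))  # False
```

The second check shows why the premises themselves must be challenged: removing one arguable assumption is enough to invalidate the formal proof, which is exactly the opening the inner arguments probe.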
A mitigation, while negating a rebuttal, introduces additional knowledge to show that the rebuttal, i.e. a risk, can somehow be tolerated. In doing so, a mitigation can also introduce new risks, leading to new iterations of rebuttals and mitigations in argumentation. Mitigating a rebuttal requires an iterative process, which introduces additional arguments incrementally. The notion of round denotes the number of iterations from the beginning. Using the round numbers, cyclic arguments can be avoided by eliminating redundant facts from the increments at different rounds. Specific to security satisfaction arguments, there are two types of rebuttals: risks and mitigations. In general, the argumentation structure iteratively relates inner arguments of logical rebuttals to outer arguments of boundary expansions. Starting from the initial ground about the software system in question, every round of argumentation may introduce additional facts and/or enclose more elements into the system boundary, furthering the scope of the knowledge.

---

\(^1\) [http://capec.mitre.org/]. \(^2\) [http://cwe.mitre.org/].

In summary, each claim of the outer argument can also be represented with Toulmin’s argument structure as \(a \xrightarrow{c} b\), where \(a\) is a ground, \(c\) is a warrant and \(b\) is the claim, to be challenged by examining the ground \(a\) and the warrant \(c\). A premise is a logic statement (Huth and Ryan, 2004), as is used in the outer argument. Each premise in an outer argument is the beginning of one thread of inner arguments built from several rounds of inner argumentation. Inner arguments are indicated by nested risks and mitigations in different rounds. In the supported iterative and recursive argumentation process, arguments represented as such are incremental in the sense that any claim can be argued against in the next round of argumentation.
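The claim/ground/warrant structure and the round-based nesting of risks and mitigations described above can be modelled with a small data structure. The sketch below is a hypothetical illustration, not the OpenRISA meta-model: it assumes a claim holds when every risk rebutting it is countered by a surviving mitigation, with rebuttals of rebuttals evaluated recursively.

```python
from dataclasses import dataclass, field


@dataclass
class Rebuttal:
    description: str
    kind: str  # "risk" or "mitigation"
    round: int  # iteration of argumentation in which it was introduced
    rebuttals: list["Rebuttal"] = field(default_factory=list)

    def is_defeated(self) -> bool:
        # A rebuttal is defeated if any of its own rebuttals survives,
        # e.g. a risk is defeated by a surviving mitigation.
        return any(not r.is_defeated() for r in self.rebuttals)


@dataclass
class Argument:
    claim: str
    grounds: list[str]
    warrant: str
    rebuttals: list[Rebuttal] = field(default_factory=list)

    def holds(self) -> bool:
        # The claim stands only if every risk rebutting it is defeated.
        return all(r.is_defeated() for r in self.rebuttals)


arg = Argument(
    claim="Only HR staff can access the HR database",
    grounds=["The access control list restricts access to HR accounts"],
    warrant="An enforced ACL limits access to the listed accounts",
)
risk = Rebuttal("HR staff share a common password", kind="risk", round=1)
arg.rebuttals.append(risk)
assert not arg.holds()  # an unmitigated risk undercuts the claim
risk.rebuttals.append(
    Rebuttal("Passwords are individual and rotated on departure",
             kind="mitigation", round=2))
assert arg.holds()  # the risk is mitigated; confidence is restored
```

Because a mitigation is itself a `Rebuttal`, it can acquire rebuttals of its own in a later round, reproducing the iterative risk/mitigation cycle.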
Such arguments are also non-monotonic in the sense that new arguments may introduce new information that overrules facts accepted in previous rounds.

2.3. Risk arguments for security requirements

According to BS ISO/IEC 27005 (2011), information security risk is associated with the potential that threats will exploit vulnerabilities of one or a group of assets and cause harm to an organisation, where an asset is “anything that has value to the organisation and which therefore requires protection”. Assets of relevance are the ones “stored in or accessed by the system-to-be” (Haley et al., 2008). Since vulnerability is the element of risk upon which security engineers can most readily act, vulnerabilities are our starting point for the identification of risks affecting the system being designed. Therefore, the RISA method is supported by catalogues that describe security vulnerabilities. Mitigation, in risk assessment, refers to treatments that aim at minimising risks by reducing their likelihood and/or impact. It can act in many ways, e.g., by removing a risk, by changing the negative consequences of a risk, or by avoiding a risk altogether; the catalogues also help security engineers in this respect. Implementing security goals, security requirements are regarded as constraints on functional requirements that protect the assets from identified harms, controlling security risks in the software system. For example, to maintain the confidentiality of personnel data while handling requests for HR data, the security requirement shall guarantee that only those requests coming from members of the human resources staff are considered. In the most general form, the outer arguments show whether the properties of \(W\) and \(S\) entail the security requirements (as shown in Eq. (1)) by using claims about how the system behaviour conforms to the domain behaviour premises of the system; this is captured by Eq. (2) (Haley et al., 2008).
\[ \text{domain behaviour premises} \vdash \text{security requirements} \quad (2) \]

Expressed in propositional logic, for example, domain experts can reason about the satisfaction of the argument formally. The choice of logic is not prescriptive: more expressive logics could also be used, as long as the underlying reasoning can be supported by translating into the corresponding logic formula, although more expressive logics might be less amenable to automated analysis. Outer arguments rely on properties of \(W\) and \(S\) (domain behaviour premises), some of which may turn out to be arguable assumptions. These premises need to be challenged, and be grounded in facts if possible, or taken as true, for instance, on the basis of a lack of contrary evidence. The general purpose of an inner argument is to try to rebut an outer argument. Notice that the outer arguments establish the scope of the security assessment, whilst the inner arguments deepen the assessment. The general structure of inner arguments is shown in Fig. 2. Fig. 3 highlights a metamodel of the argument diagram structure used in this work.

3. Related work

Related work is organised around structured argumentation, including its process and representation, and risk assessment.

3.1. Structured argumentation

The argumentation process provides a rationale to convince an audience that a claim should be considered valid. Three qualities are often discussed in the informal argumentation literature: convincingness, soundness, and completeness. Convincingness relates to whether the argumentation is compelling enough to assure an intended audience that the conclusion reached is reasonable (Haley et al., 2008). Soundness relates to whether the argumentation fulfils the argumentation schema and is based on “true premises” (Graydon and Knight, 2008), i.e. on true grounds and warrants.
Completeness relates to whether nothing has been omitted that could lead to a different conclusion about a claim (Graydon and Knight, 2008; Shum and Hammond, 1994). A known problem in argumentation is the subjectivity involved in identifying arguments and counter-arguments (which relates to soundness), and the difficulty in determining completeness. Proposals to reduce these problems rely on the help of: (i) predefined critical questions (Atkinson et al., 2004; Walton, 1996), (ii) what-if scenarios (Baroni et al., 2009), (iii) expert assurance checks (Graydon and Knight, 2008), (iv) guidelines and checklists (Lipson and Weinstock, 2008), or (v) how/why questioning, as proposed in Haley et al. (2008). However, with a few exceptions, for example Cyra and Górski (2011), these approaches either rely on the availability of experts or are rather static. The OpenRISA support provided to the RISA method allows the practical leverage of evolving public catalogues, updated using input by a pool of security experts from several organisations (http://cwe.mitre.org/community/index.html).

3.2. Process of argumentation

The process of Toulmin-style argumentation (Toulmin et al., 1984), enhanced with recursion by Newman and Marshall (1991), follows a depth-first style and is based on a two-person game. It provides a neat and intuitive view of the evolution of an argument in the format of a debate or dialogue between two players. For example, Cohen (1987) regards argumentation as a dialogue, while Walton (1996) regards it as a debate where the opponent asks critical questions. This process of argumentation, however, is not suitable when reasoning about security with risk assessment. In this case, it often happens that one risk challenges (i.e. rebuts) several premises, one mitigation rebuts several risks, and several mitigations rebut a single risk.
As a result, a breadth-first style of argumentation scales better for practical security reasoning: risks are identified for a bundle of premises (or all of them), then mitigations are identified and analysed for all of the risks, and so on. This style is closer to the graph presentation of argumentation by Dung, in which arguments are nodes and attack relations are arcs, than to the traditional tree representation of argument-counterargument by Toulmin. Therefore, risk-based argumentation can trace risks and mitigations to facts and rules introduced in any one of the previous rounds. 3.3. Risk assessment Risk assessment involves distinct steps of risk identification, risk analysis/evaluation, and risk treatment. There are three basic perspectives on risk identification: it can be asset-driven, threat-driven or vulnerability-driven. The asset-driven approach prescribed, e.g., by the BS ISO/IEC 27005 (2011), Hakemi et al. (2014) and CORAS (Lund et al., 2011), considers assets as the primary focus for the identification of risks. The threat-driven approach adopted, e.g., by the NIST standard (SP 800–30 rev. 1, 2012), identifies threats (sources and events) as the starting point for risk identification. Vulnerability-driven approaches, on the other hand, consider vulnerabilities on the target of analysis as an attack surface that represents risks; threat agents will, at some point, discover and exploit them. Therefore, it is not imperative to pinpoint the threat agent. Tools based on vulnerability scanning, such as Nessus,\(^3\) and the RISA method, with its catalogues of vulnerabilities, take this last approach. Moreover, RISA adopts a context-based elicitation of risks through argumentation, allowing logical traceability of risks and mitigations to premises and security requirements. The identification of risks may be supported by different artefacts and methods. For example, checklists (as in the BS ISO/IEC 27005 (2011) and SP 800–30 rev.
1 (2012)), workshops with stakeholders (as in CORAS (Lund et al., 2011)), threat taxonomies (e.g., OWASP\(^4\)), and in-house catalogues containing information about previous incidents. --- \(^4\) [https://www.owasp.org](https://www.owasp.org). These approaches have drawbacks: checklists soon become obsolete, workshops are resource-intensive and hard to schedule with stakeholders, and taxonomies and in-house catalogues are not comprehensive. The RISA method addresses these drawbacks by considering ever-changing security catalogues maintained by experts around the world. Nevertheless, the OpenRISA tool is flexible enough to be complemented by any of these other approaches. Risk identification in early stages of system development is part of requirements elicitation, and is often threat-driven. Such approaches assume an attacker/misuser/abuser as threat agent and elaborate on “what can go wrong with the system in the environment” (Raspotnig and Opdahl, 2013), i.e., actions performed by the threat agent. One example of such approaches is Misuse Case (Sindre and Opdahl, 2005), which assumes an identifiable sequence of actions performed by a misuser. However, both the misuser and the actions may not be known, or may be irrelevant. Let us consider the risk “PIN is revealed if sent unencrypted within the PED and the PED can be tampered with”. It states that the risk is derived from the combination of a threat (PIN is revealed if sent unencrypted within the PED) and a vulnerability (the PED can be tampered with). The threat agent could be identified as a generic attacker; however, the actions performed by the attacker to exploit the vulnerability become irrelevant if the defender chooses to mitigate the risk, e.g., by encrypting the PIN inside the PED. Now, let us consider the risk “PIN is revealed by missing PIN field masking”.
The threat can be exercised by exploiting the vulnerability, i.e., missing PIN field masking in the PED keypad. In this case, the threat agent is not identifiable (it could be, e.g., a customer or a merchant in a shop, or a waiter in a restaurant) and the steps which lead any threat agent to exploit the vulnerability are neither identifiable nor relevant. Exposing the risks and proposing mitigations to the vulnerability become the rationale of RISA, spelled out by the risk-based argumentation. 4. The tool-supported RISA method An overview of the data flow for a security analyst using the tool-supported OpenRISA approach is illustrated in Fig. 5. To support the analyst, the OpenRISA tool has four major components: (1) a model-based editor to help the analyst elicit and reason about additional phenomena; (2) a causal analysis to create the outer arguments, using the OpenRISA argumentation tool; (3) a Lucene-based search engine that helps the analyst find relevant risks and mitigations from the domain-specific keywords used in the outer arguments; (4) support for labelling assumptions as formal propositions, using inner-argument logic expressions in the OpenRISA tool, which are then fed into a propositional logic reasoner to check the reasoning. These OpenRISA components can be used iteratively; there is therefore no predetermined ordering between them. Due to the data-flow dependencies, however, a natural order is suggested by the steps of the RISA method illustrated in Fig. 6. In the following step-by-step descriptions of the RISA method, we show how the components of the tool are used to support the analyst, as depicted in Fig. 5. The steps extend the process of argumentation for security requirements proposed in the Haley et al. framework by incorporating a process of risk assessment (see Fig. 6). 4.1.
Steps 1–3 In Step 1 (Identify Functional Requirements), functional requirements of the system and the system context (domains and shared phenomena) are identified. These requirements may be derived from the higher-level goals of the system. In Step 2 (Identify Security Goals), the assets that need to be protected, and the security goals, are identified. Notice that security goals concern the protection of “valuable assets”, specific contextual domains that generic functional goals are not concerned with. In Step 3 (Identify Security Requirements), security requirements are derived from security goals, and are expressed as constraints on the functional requirements identified in Step 1. System context diagrams (one for each security requirement) are constructed in this step. 4.2. Step 4: construct outer arguments The outer arguments for security requirements are constructed in Step 4 of RISA based on problem diagram analysis. These outer arguments make use of domain properties to refer to the domains in the context of analysis, which may expand the scope of reasoning to additional phenomena. Behavioural premises used in the outer arguments may contain risks, which are identified using a systematic risk assessment process in RISA. This is represented in Fig. 6 by the arrow from Step 4 to Step 5. Steps 5–8 correspond to the process of constructing inner arguments in the Haley et al. framework. These four steps show how domain assumptions in outer arguments are challenged by means of risk assessment based on public security catalogues. Public catalogues provide input for Steps 5 and 6. In the RISA method, we use CAPEC and CWE to feed the identification of risks with descriptions and information about known attack patterns and weaknesses in software, and for information on how these attacks and weaknesses can be mitigated. In-house security catalogues or other methods, such as CORAS’ workshops with stakeholders (Lund et al., 2011), may complement public security catalogues.
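The catalogue consultation in Steps 5 and 6 amounts to keyword lookup over the entry texts. A minimal sketch of such a lookup in Python (the entry texts are invented, and the crude suffix stripping merely stands in for the Porter stemming that the actual Lucene-based search uses):

```python
# Sketch of a stem-based keyword lookup over catalogue entries.
# The suffix stripping below is a crude stand-in for the Porter stemmer
# used by Lucene's analyzer; the entry texts are invented examples.

SUFFIXES = ("ing", "ed", "ion", "s")

def stem(word):
    """Crudely reduce a word to a stem by stripping one common suffix."""
    w = word.lower()
    for suffix in SUFFIXES:
        if w.endswith(suffix) and len(w) - len(suffix) >= 3:
            return w[: -len(suffix)]
    return w

def search(keywords, catalogues):
    """Return (keyword, catalogue, entry id) triples for entries whose
    text contains a word sharing a stem with the keyword."""
    hits = []
    for keyword in keywords:
        target = stem(keyword)
        for name, entries in catalogues.items():
            for entry_id, text in entries.items():
                if any(stem(token) == target for token in text.split()):
                    hits.append((keyword, name, entry_id))
    return hits

catalogues = {
    "CAPEC": {"attack_pattern_89": "attacker connects to the victim host"},
    "CWE": {"weakness_258": "empty password in configuration file"},
}
print(search(["connection", "password"], catalogues))
```

In the actual method, the toy dictionaries are replaced by the text files produced from the CAPEC and CWE XML, and Lucene performs the stemming and matching.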
The recursion of the inner argumentation is represented in Fig. 6 by both the indirect connection between Steps 5 and 7 (through Step 6), which involves the process of finding risks and mitigations to these risks, and the direct connection between Steps 7 and 5, which involves the process of finding new risks in mitigations. Since these risks are attached to arguments and security requirements, prioritising risks indirectly results in the prioritisation of arguments and security requirements. 4.3. Step 5: identify risks In this step, behavioural premises in outer arguments regarding the domains (arrow from Step 4 to Step 5 in Fig. 6) are analysed in terms of potential risks (which rebut the premises). Public security catalogues are then searched to find known security weaknesses and attack patterns. 4.3.1. Automated catalogues search The search in both catalogues is automated as follows. 1. Each XML dataset entry (e.g., CAPEC v2.1: http://capec.mitre.org/data/xml/capec_v2.1.xml, and CWE v2.5: http://cwe.mitre.org/data/xml/cwec_v2.5.xml.zip catalogues) is converted into a text file using the XSL scripts shown in Figs. 7 and 8. 2. Those text files are searched, given keywords, using the Lucene Java library (Hatcher and Gospodnetic, 2004) through a command line script (query.sh). 3. The PorterStemAnalyzer from Lucene, which implements the Porter Stemming algorithm (Porter, 1980), is used to remove common suffixes from keywords, when applicable. For example, keywords “connection”, “connected” or “connecting” will all be reduced to the stem “connect”, and the search for this stem in the text will recover catalogue entries containing any such variations in suffixes common in English. In the following scripts in Figs. 
7 and 8, we use XPath expressions such as “/Attack_Catalog/Attack_Patterns/Attack_Pattern” and “/Weakness_Catalog/Weaknesses/Weakness” to extract the natural language descriptions of the XML records about attacks and weaknesses in the CAPEC and CWE catalogues, respectively. The “for-each” clauses in the stylesheets ensure that every textual description in the records is covered. As a result, a single text file, named after the “ID” of the record, is obtained and fed into the Lucene information retrieval engine, such that any matching document can be traced back to the corresponding catalogue entry. Running a search involves entering keywords in a query text file (‘query.txt’), one keyword per line, and running the command-line query script to execute Lucene’s search functionalities.5 Keywords may be composed of more than one word. 5 The automated search is available in the RISA repository at http://sead1.open.ac.uk/risa/search.php. Therefore, the queries query.sh CAPEC and query.sh CWE perform the search for the given keywords in the CAPEC and CWE catalogues, respectively, and query.sh CAPEC CWE performs the search in both catalogues. The output is also a text file (hits.txt) containing triples (keyword, catalogue, entry reference), one per line. For example: victim CAPEC attack_pattern_89 password CWE weakness_258 4.4. Step 6: identify & classify mitigations This step involves analysing the catalogue entries related to the risks identified in the previous step to (i) find appropriate security mechanisms for mitigating them (arrow from the catalogues to Step 6) and (ii) classify these mitigations according to two categories of risk treatment: mitigate-by-system and mitigate-by-context. There are risks for which the obligation to mitigate them is fully transferred either to the system context (when all their mitigations are classified as mitigate-by-context) or to the system itself (when all their mitigations are classified as mitigate-by-system).
On the other hand, there are risks for which this obligation is shared between the system context and the system. Therefore, in the classification made in this step, we aim to identify to which group of risks the mitigations are applicable. 4.5. Step 7: consolidate mitigations Only mitigations assigned to the system, i.e., mitigations classified as mitigate-by-system in Step 6, are considered in this step. Mitigate-by-context mitigations are not carried forward, since context domains are responsible for implementing them. 4.6. Step 8: prioritise risks In the last step, risks are prioritised on the basis of their risk level (e.g., likelihood × impact) from expert estimation, as indicated by an input arrow feeding Step 8 in Fig. 6. The catalogues contain, for some entries, predefined ratings of likelihood and impact, but these need to be customised (especially the impact rating) to the system under analysis. These risk levels affect the priority of outer arguments, and therefore of the security requirements to be satisfied (arrow from Step 8 to Steps 1–4, Fig. 6). When the residual risks from the inner argumentation are deemed acceptable, given limitations found in practice (e.g., limitations of development resources), the system is considered to have reached a satisfactory level of security. 5. The OpenRISA tool This section presents a domain-specific modelling language corresponding to the metamodel of outer arguments, inner arguments and risk assessment shown in Fig. 3. The language presented here extends the argumentation language presented by Yu et al. (2011). This section highlights the syntax and semantics of the integrated argumentation language. It also illustrates the algorithms for checking rebuttals and mitigations in the inner and outer arguments. 5.1. Novelty of OpenRISA OpenRISA proposes an integrated modelling language for argumentation and risk assessment.
For the outer arguments expressed in this new language, the OpenRISA tool supports (1) formal checking of whether the risks found by searching the catalogues are indeed rebuttals to the satisfaction argument; (2) formal checking of whether the mitigations to the risks are indeed capable of restoring the rebutted arguments; and (3) prioritising the risks based on a global threshold for selecting the relevant arguments. In terms of the steps in the RISA method, the OpenRISA tool can be used to (i) check the satisfiability of the outer arguments in Step 4 through formal reasoning, (ii) describe and visualise the informal inner arguments in Steps 5–7, and (iii) identify prioritised risks in Step 8. 5.2. Syntax of outer arguments Following Haley et al., we use propositional logic to write the outer arguments. In the syntax of OpenRISA, propositional variables are first defined and described before a formula is written. ``` argument: prop_example boolean S1, S2, S3 S1 "User enters ID and passcode" with S1 S2 "User ID and passcode match a pair of stored ID and passcode" with S2 S3 "User is a valid user" with S1 & S2 → S3 ``` In the above example, three propositional variables S1, S2 and S3 are defined and described using strings in lines 5–8. The keyword with indicates the formal assertion: S1 and S2 assert that they are both true, whilst S3 asserts that S1 ∧ S2 → S3. Logical connectives are written using the standard encoding: ¬ for negation, ∧ for conjunction, → for implication and ↔ for equivalence. The three statements together therefore assert S1, S2, and S1 ∧ S2 → S3. OpenRISA is integrated with the tool Decreasoner (Mueller, 2011), and can check the correctness of propositional formulae such as this. 5.3. Visual and textual syntax of inner arguments Each inner argument has exactly one claim. A claim is a proposition whose truth value is to be established by the grounds and warrants supporting the claim.
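The claim–ground–warrant structure just described can be captured by a small data type. A sketch follows (the class and field names are our own, not OpenRISA syntax; the descriptions echo the example of Section 5.2):

```python
# A small data type capturing the argument structure: one claim,
# optional supporting grounds and warrants (themselves arguments), and
# the round at which the argument was introduced. Names are our own
# illustration, not the OpenRISA tool's syntax.

from dataclasses import dataclass, field

@dataclass
class Argument:
    claim_id: str
    description: str
    round_no: int
    grounds: list = field(default_factory=list)   # sub-arguments
    warrants: list = field(default_factory=list)  # sub-arguments

    def is_ground(self) -> bool:
        # A ground is an argument whose claim has no support of its own.
        return not self.grounds and not self.warrants

f1 = Argument("F1", "User ID and passcode match a stored pair", 1)
w1 = Argument("W1", "Matching credentials identify a valid user", 1)
a1 = Argument("A1", "User is a valid user", 1, grounds=[f1], warrants=[w1])

print(f1.is_ground(), a1.is_ground())  # True False
```

Turning a ground into a claim, as in Fig. 10, then simply means giving `f1` its own grounds and warrants in a later round.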
When there is no ground or warrant supporting the claim, we take the claim to be self-evident, or a ground. In other words, a ground is an argument whose claim has no supporting ground or warrant. As shown in Fig. 9, visually an argument is represented by a rectangle with three compartments. In the top compartment, the single claim of the argument is written in the format ID: Description round#, where ID is the identifier of the claim, Description is a natural language description of the claim, and round# is a time stamp indicating the round at which the claim was introduced. In the middle compartment of the argument with the claim A1 is another argument F1 with three compartments. Since only the claim of the argument F1 is given, it is taken as a ground for the argument A1. Similarly, W1 is another ground, one that warrants that the claim A1 is true because of the ground F1. The use of the same notation for an argument, a ground, and a warrant allows users to easily turn a ground into a claim. For instance, in Fig. 10, the ground F1 is no longer treated as a ground, but rather as a claim that needs to be supported further by a ground and a warrant. As a result, F1 is now changed into the claim of a second argument A2, supported by F3 and W2. This nested style of syntax is appropriate for representing arguments because, during the process of argumentation, grounds are typically challenged, leading to additional knowledge being incorporated into the arguments. Since arguments can be nested, it is often useful to know at what stage during the argumentation process claims, grounds, and warrants are introduced: this is indicated by round#. In this example, it is clear that F3 and W2 (round 2) were introduced after A1, A2 and W1 (round 1). As well as nesting sub-arguments within arguments, arguments may be related to other arguments through rebuttal and mitigation relationships. Fig.
11 shows how the nested argument (A1 containing the sub-argument A2) is rebutted by another argument A3. The rebuttal relationship is represented by the dotted red line, indicating that the claim of the argument A2 cannot hold because some unauthorised people can also obtain valid IDs and passcodes. The effect of this rebuttal is not only that the argument A2 is false, but that the argument A1 is false as well: this is indicated by the solid pink arrow pointing at the boundary of A1. --- 6 The tool is available for download as an Eclipse rich client application from http://sead1.open.ac.uk/risa. The solid pink link shows the scope of the rebuttal, and is particularly useful when there are several levels of nesting, and when there are several rounds of arguments, as it shows clearly the highest-level argument that has been rebutted. Note that the choice of cue for the different types of edges is a combination of dashed/solid arrows and colours, as experience has shown that colour is not always the best cue for visualisation (Ernst et al., 2006). In the RISA method, a rebuttal to a security argument represents a risk, and may be addressed by a mitigation. Generally, a mitigation restores the claim that has been negated by a rebuttal. In the example in Fig. 11, the mitigation to A3 could be an argument that claims that legitimate users are instructed not to divulge their IDs and passcodes. Notice that this mitigation argument does not necessarily say that A3 is false: it simply says that the rebuttal can be tolerated by giving legitimate users a certain responsibility. Since a system cannot prevent a user from divulging their ID and passcode, the rebuttal A3 remains valid. This is a kind of residual risk to the system. Diagrammatically, a mitigation relationship is represented by a solid green arrow. 5.4.
Integrated syntax for arguments and risks The graphical syntax of arguments is represented as a simple user-editable textual syntax, and the two syntaxes are supported by bi-directional synchronisation between the editors created using the Eclipse Modelling Framework (EMF) and the Graphical Modelling Framework (GMF). Furthermore, the textual syntax may also contain formal arguments in propositional logic, which are ignored by the synchronisation. The informal arguments shown in Fig. 11, and their formalisations, are described by the following textual syntax. In the RISA method, the outer arguments are elicited through a systematic procedure from the premises of the causal phenomena in the context diagrams, whilst the inner arguments, in particular the identification of rebuttals and mitigations, are based on risk assessment. OpenRISA syntax now allows entries from catalogues, such as CAPEC and CWE, to be included in the informal argument. 5.5. Automated reasoning about arguments in integrated syntax The OpenRISA tool supports the checking of risks and mitigations from inner arguments based on propositional logic, and the identification of risks. It checks the overall satisfaction of the claims of security requirements for the validity of a risk or a mitigation. When this is iteratively applied to all the rounds, it is possible to identify whether or not a logical rebuttal holds. The correspondence between the logical literals and the natural language expressions of the facts is, however, not checked by the logic reasoning. This reasoning is implemented by the two algorithms discussed below. When formalising the argumentation as propositions, the basic structure of an argument is transformed into the following formula: $$P_G \land P_W \rightarrow P_C \quad (3)$$ where $P_G$ is the conjunction of the propositions of the grounds, $P_W$ is the conjunction of the propositions of the warrants, and $P_C$ is the conjunction of the propositions of the claim.
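Under this encoding of grounds, warrants and claims, entailment can be checked by brute force over truth assignments. The sketch below, applied to the S1–S3 example of Section 5.2, stands in for the Decreasoner-based check (the encoding of formulas as Python functions is our own):

```python
# Brute-force propositional entailment check, a stand-in for the
# Decreasoner-based reasoning: a claim is entailed if it holds in every
# truth assignment satisfying the knowledge base. Formulas are encoded
# as Python functions over an assignment dict (our own encoding).

from itertools import product

VARS = ["S1", "S2", "S3"]

def entails(kb, claim):
    """True iff every model of all formulas in kb also satisfies claim."""
    for values in product([False, True], repeat=len(VARS)):
        model = dict(zip(VARS, values))
        if all(f(model) for f in kb) and not claim(model):
            return False
    return True

kb = [
    lambda m: m["S1"],                                 # S1 holds
    lambda m: m["S2"],                                 # S2 holds
    lambda m: (not (m["S1"] and m["S2"])) or m["S3"],  # S1 ∧ S2 → S3
]
print(entails(kb, lambda m: m["S3"]))       # True: S3 is entailed
print(entails(kb[:2], lambda m: m["S3"]))   # False without the warrant
```

The second check illustrates why the warrant matters: with only the grounds S1 and S2 in the knowledge base, a model with S3 false still satisfies it, so the claim is not entailed.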
Note the difference between $\rightarrow$ in (3) and $\vdash$ in (2): propositional logic is used here as one of many possible formal logics. For domain experts, propositional logic is conceptually simpler than higher-order logic and practically supported by reasoning tools. Instead of higher-order logic rules, we capture risks and mitigations in the iterative argumentation structure and use the associated algorithmic process to achieve the non-monotonic reasoning. The tool automatically extracts the identifiers of the claims as propositional literals and constructs a propositional formula accordingly in conjunctive normal form. Syntactically, every informal claim may be accompanied by a propositional formula, such as the formula for S3. The tool first parses the syntax of the propositional Algorithm 1: Note that set notation is used extensively; the round number of an argument structure is an element of \( N \), the set of natural numbers. **Input:** \( A^* = \{ a \mid a.r \in N \} \), a set of incremental arguments \( a \), each annotated with a natural number \( a.r \in N \) indicating the round of \( a \); **Output:** rebuttals \( R^* = \{ (a_j, a_i) \} \) and mitigations \( M^* = \{ (a_j, a_i) \} \) where \( 1 \leq i < j \leq \max_{a \in A^*} (a.r) \).
1. \( S := \{ \langle\rangle \} \) // initially a set containing an empty sequence
2. for \( i = 1 \ldots \max_{a \in A^*} (a.r) \) do
3.   \( S' := \{\} \) // sequences extended at this round
4.   for each \( a \in A^* \) with \( a.r = i \) do
5.     for each \( s \in S \) do
6.       \( s' := \text{concat}(s, a) \)
7.       \( S' := S' \cup \{ s' \} \)
8.     end
9.   end
10.  \( S := S' \)
11. end
12. for each \( s \in S \) do
13.   \( R, M := \text{CheckingArgumentRelationships}(s) \) // see Algorithm 2
14.   \( R^* := R^* \cup R \); \( M^* := M^* \cup M \)
15. end
Algorithm 2: Note that set notation is used extensively; the grounds \( G \), warrants \( W \) and claim \( C \) components of the argument structure are accessed by the \( \cdot \) operator.
The symbol \( KB \) stands for the knowledge base, a collection of propositions in conjunctive normal form. **Input:** \( s = (a_1, \ldots, a_n) \), \( n \in N \), a sequence of incremental arguments, each of the form \( a_r.G, a_r.W \vdash a_r.C \); **Output:** for the arguments at rounds \( i < r \), where \( 1 \leq i < r \leq n \): a) rebuttals \( R = \{ (a_r, a_i) \mid (KB_{r-1} \vdash a_i.C) \wedge (KB_r \vdash \neg a_i.C) \} \); b) mitigations \( M = \{ (a_r, a_i) \mid (KB_{r-1} \vdash \neg a_i.C) \wedge (KB_r \vdash a_i.C) \} \).
1. \( KB_0 := \{\} \)
2. for \( r = 1 \ldots n \) do
3.   // update the knowledge base at the r th round:
4.   \( a := s[r] \)
5.   \( KB_r := KB_{r-1} \cup a.G \cup a.W \cup \{ a.G \wedge a.W \rightarrow a.C \} \)
6.   // verify the satisfaction of the claims so far:
7.   for \( i = 1 \ldots r \) do
8.     \( a' := s[i] \)
9.     \( V_{r,i} := \text{eval}(KB_r \vdash a'.C) \) // Decreasoner
10.  end
11. end
12. // output the verification results:
13. for \( i = 1 \ldots n \) do
14.   for \( r = i + 1 \ldots n \) do
15.     if \( V_{r-1,i} \wedge \neg V_{r,i} \) then
16.       \( R := R \cup \{ (a_r, a_i) \} \)
17.     else if \( \neg V_{r-1,i} \wedge V_{r,i} \) then
18.       \( M := M \cup \{ (a_r, a_i) \} \)
19.     end
20.   end
21. end
formula and, adapting the Xtext unparsing API, it weaves the implicit logic rule (3) together with the user-defined rules into a syntactically correct propositional statement according to the BNF production rules of the Event Calculus. Our algorithms traverse the round-based incremental argumentation structure to check all possible rebuttals and mitigations between adjacent rounds for effective ones: any rebuttal negates the argued claim, and any mitigation removes the negation of the previous rebuttals. When adding further rounds of arguments, the logic knowledge base is non-monotonic.
Therefore, support for the initial claims has to be re-examined after each round of increment. The output of the algorithms presents the effective argumentation process as a directed acyclic graph in which nodes are the incremental arguments and edges are the rebuttal or mitigation relationships between arguments of adjacent rounds. Since arguments are typically constructed incrementally, over several rounds, there are usually many sequences of arguments, rebuttals and mitigations in an argument graph. Algorithm 1 enumerates all possible sequences in the argumentation process. For each sequence, Algorithm 2 is used to identify the incremental arguments that rebut and mitigate the previous arguments. Note that an incremental argument is checked only if its priority is between the lower and upper bounds of a user-defined range. In other words, a rebuttal with a risk below a certain level, or a mitigation with a cost above a certain threshold, will not be used for reasoning. On each sequence of incremental arguments \( s' \), rebuttals and mitigations may only appear alternately. Algorithm 2 takes as input the sequence \( s' \) and generates as output the rebuttals \( R \) and mitigations \( M \). Specifically, it updates the knowledge base at each round by taking into account the newly introduced, removed and modified facts; it invokes an external reasoning tool (Decreasoner) (Mueller, 2011) to turn the encoded propositional logic formula into a satisfiability problem for a solver; and it converts the satisfiability evaluation results into risks and mitigations. Let \( n = \max_{a \in A^*} a.r \) be the number of rounds in the argumentation process; the complexity of Algorithm 1 is \( O(n) \) per sequence, where the input size \( m = |S| = \prod_{i=1}^{n} |\{ a \mid a.r = i \}| \) is the total number of possible sequences of arguments.
For example, if there is one original claim to be argued, and there are 2 incremental arguments at the first round, 3 at the second round, and 2 at the third round, then the number of possible argumentation sequences is \( 1 \times 2 \times 3 \times 2 = 12 \). Algorithm 2 has a complexity of \( O(n^2) \) set operations for the incremental updates of the knowledge bases, plus \( O(n^2) \) inquiries of the knowledge bases to verify all the claims. In our implementation using Decreasoner, the automated reasoning is quick, especially when the arguments are introduced incrementally. Although the SAT solver is called \( O(n^2) \) times, even for a more complex argument diagram of 64 nodes and 5 rounds it produces the reasoning results in less than 10 s. Even though satisfiability checking is NP-complete in the worst case, it proved quite effective for the practical purposes of our case study. Of course, if one were to check the argument for only one particular claim rather than all possible claims, the complexity would reduce to \( O(n) \) by converting the inner verification loop into a single iteration. 6. The PIN entry device (PED) example The PIN Entry Device (PED) is a type of device widely deployed and used by consumers to pay for goods with debit or credit smartcards at Points-Of-Sale (POS). When using the device, cardholders typically insert their cards, issued by a financial institution, into a card-reader interface of the PED, enter the PIN using the PED’s keypad, and confirm the transaction value via a display on the PED itself. Smartcard-based systems are then expected to authenticate cardholders via the PIN and verify the card details against a public-key certificate before transactions can be completed successfully.
These certificates are usually stored on the chip, but they can also be stored on the magnetic stripe for compatibility with card-readers that have not adopted this technology. Most PEDs used in Europe implement the EMV (EuroPay, MasterCard and Visa) protocol in the process of authentication and authorisation of payment transactions. This protocol drives the communication at the PED–card interface and the PED–bank interface. The protocol in principle allows only encrypted transmission of the PIN across these interfaces when the PED, card and bank support asymmetric cryptography. However, many card issuers in Europe make the conscious decision to adopt a low-cost EMV option in their smartcards, which researchers have found to be vulnerable (Drimer et al., 2008). Note that the example is mainly used to illustrate the overall tool-supported approach rather than to validate the approach itself. 6.2. Steps 1–3 The PED documentation (Card Payment Group, 2003; Drimer et al., 2008; Mastercard, 2004) allowed us to identify the PED overall functional requirement (FR) – step 1, security goal (SG) – step 2, and two security requirements (SR1) and (SR2) – step 3: (FR1) Allow consumers to pay at Points-Of-Sale with PIN (SG) Protect the PIN (SR1) PIN entered by consumers shall remain confidential during payment transactions at Points-Of-Sale (SR2) PIN entered by consumers shall remain accurate during payment transactions at Points-Of-Sale Also as part of step 3, the system context $W$ in the entailment (1) is elaborated. The functional requirement of the PED helps us to delimit the context; from (FR1), we identify five domains: consumer, card, merchant, terminal and bank. Fig. 12 shows the context of the PED system and its security requirements. The notation is unusual in two ways: (i) it treats the PED system as a machine with its own components, and (ii) it represents shared phenomena by directed arrows.
In adopting (ii) we show graphically that a shared phenomenon is controlled by one domain and observed by another (Jackson, 2001). Therefore, the domain $B$ at the arrow head observes the phenomenon $p$ controlled by the domain $A$ at the tail. Textually, this is represented by $A!p$. Notice that the diagram shows the shared phenomena related not only to the PIN, but also to the card details and the transaction value, which are relevant to PED payment transactions. Note that the assignment of the formal propositional formulae to risks and mitigations still needs to be done manually, because automation could easily introduce errors. Therefore, these steps cannot be fully automated. On the other hand, the explicit documentation of the formulae and the automated reasoning tool support make it possible to investigate the intended logical meaning behind the informal arguments and to diagnose unexpected outcomes. 6.3. Step 4: construct outer arguments According to the entailment (2), and derived from the PED overall behaviour from the moment the consumer enters a PIN (consumer!PIN) until a payment transaction is confirmed by the bank (bank!confirmation-transaction), we have the following outer argument: \[ \text{consumer!PIN} \rightarrow \text{bank!confirmation-transaction} \vdash (\text{SR1} \land \text{SR2}) \] Premises P1–P6 are warrants that the argument consumer!PIN → bank!confirmation-transaction is true, and therefore that SR1 and SR2 are satisfiable. Warrants P1–P6, derived from Fig. 12, are expressed in the OpenRISA tool notation as follows. Note that, for illustration purposes, we label the premises with an index starting from 1 instead of 0, while in the implementation it is easier to index them as an array starting from 0. Although there can be more than one sequence of events when explaining the behaviour of the context diagram, a rebuttal only requires one such instance to demonstrate that the claim does not hold.
In total, 35 risks were identified for the PED example, after eliminating, according to domain expertise, the false positives returned by the keyword search. These risks invalidate the satisfaction of the security argument A1 by rebutting its warrants P1–P6 in practice. For instance, risk R1 rebuts argument A1 (premise P1) and is supported by CAPEC-455 and CAPEC-89, while risk R8 rebuts argument A1 (premises P2 and P5) and is supported by CWE-311, CWE-325 and CAPEC-439. Risks are expressed in the OpenRISA tool notation as shown in the listing below; note that this listing complements the one defining A1.

6.5. Steps 6 and 7: identify, classify & consolidate mitigations

The CAPEC and CWE entries describe possible mitigations. For instance, from the catalogues we derived the following mitigations to rebut risks R1 and R8, restoring argument A1, expressed in the OpenRISA tool notation; note that this listing complements the one defining the risks (round 2).

6.6. Step 8: prioritise risks

The catalogues provide off-the-shelf support for the estimation of risk level based on expert ratings for likelihood and impact in terms of confidentiality, integrity and availability. However, these ratings depend on expert judgement. The RISA method can be used in conjunction with any technique that estimates risk level – quantitative, semi-quantitative or even qualitative (where risk level is assigned to high, medium or low).

7. Discussions

We organise our discussion around two areas: lessons from the catalogue search, drawn from experience with the PED example, and short-term future work.

7.1. Systematic search of catalogues

The RISA method takes advantage of an automated keyword-based search of the CAPEC and CWE security catalogues, using predefined fields converted from XML to text. The systematic and repeatable search process incorporated into the RISA method allowed us to identify 35 risks from the analysis of 207 attack patterns and weaknesses.
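As a rough illustration of step 8, qualitative ratings can be mapped to numbers and combined into a risk level. The 1–3 scale, the averaging over confidentiality, integrity and availability, and the ratings assigned to R1 and R8 below are our own assumptions, not values prescribed by the catalogues.

```python
# Hypothetical risk-level estimate from catalogue-style expert ratings.
# Assumed scale: low=1, medium=2, high=3; impact averaged over
# confidentiality (c), integrity (i) and availability (a).

RATING = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood, c, i, a):
    impact = (RATING[c] + RATING[i] + RATING[a]) / 3.0
    return RATING[likelihood] * impact

# Made-up ratings for two of the identified risks:
levels = {
    "R1": risk_level("high", "high", "medium", "low"),   # 3 * 2.0 = 6.0
    "R8": risk_level("medium", "high", "high", "low"),   # 2 * 7/3 ~ 4.67
}
ranked = sorted(levels, key=levels.get, reverse=True)
print(ranked)   # ['R1', 'R8']
```

Any other estimation technique, quantitative or qualitative, can be substituted for `risk_level` without changing the rest of the method, which is the flexibility the text claims for RISA.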
In contrast, the ad hoc search applied to the same example, as reported in Franqueira et al. (2011), allowed us to identify only 9 risks, based on the analysis of a small number of catalogue entries. Despite this significant improvement in catalogue coverage, the choice of keywords remains a challenge. There are countless possible keywords that can be used to search the catalogues for the same premise. We partially addressed this issue by applying the Porter stemming algorithm (Porter, 1980) to remove common English suffixes. Keywords provided are therefore automatically reduced to stems (when applicable), increasing the coverage of the catalogues while decreasing the number of keywords. Nevertheless, translating an information need into a searchable query is not necessarily straightforward (Borgman, 1996). We reflect on our experiences using RISA's catalogue search in terms of precision and recall, illustrated by examples. The keyword encrypt uncovers weakness CWE-807 (Reliance on Untrusted Inputs in a Security Decision), used as a reference for risk R1.18. However, weakness CWE-20 (Improper Input Validation) would be even more appropriate as a reference for this risk. CWE-20 was not found because its description summary and extended description contained none of the keywords searched. This issue may affect the identification of mitigations and the prioritisation of risks. RISA's search using the keyword log returns catalogue entries related to log file(s), audit log(s) and logging, which represent true positives and should be analysed. Yet, it also returns false positives related to, e.g., log in or logged. This raises a question about whether to increase the precision of keywords (e.g., using compound keywords) or to cope with an increased number of search results.
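The effect of stemming on catalogue search can be shown with a toy example. The stemmer below is a deliberately simplified stand-in for the full Porter algorithm, and the catalogue "entries" are just the abbreviated titles of the CWEs cited above; the matching rule (stem both keyword and entry tokens, look for set membership) is our illustration, not the RISA implementation.

```python
import re

def light_stem(word):
    """Very simplified Porter-style suffix stripping (illustration only)."""
    w = word.lower()
    if w.endswith("ion") and len(w) > 5 and w[-4] in "st":
        w = w[:-3]                      # encryption -> encrypt
    elif w.endswith("ing") and len(w) > 5:
        w = w[:-3]
        if len(w) > 2 and w[-1] == w[-2]:
            w = w[:-1]                  # logging -> logg -> log
    elif w.endswith("s") and not w.endswith("ss"):
        w = w[:-1]                      # logs -> log
    return w

# Abbreviated entry titles for the weaknesses discussed in the text:
CATALOGUE = {
    "CWE-311": "Missing Encryption of Sensitive Data",
    "CWE-807": "Reliance on Untrusted Inputs in a Security Decision",
    "CWE-20":  "Improper Input Validation",
}

def search(keyword, catalogue):
    stem = light_stem(keyword)
    return [eid for eid, text in catalogue.items()
            if stem in {light_stem(t) for t in re.findall(r"[a-z]+", text.lower())}]

print(search("encrypt", CATALOGUE))   # ['CWE-311']
```

Note how the stem `encrypt` matches "Encryption" in CWE-311 but not CWE-20, whose title contains none of the searched stems: a miniature version of the recall problem the text describes.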
We used the off-the-shelf Lucene search engine to compute the relevance of queries (keywords selected from the outer argument) to documents (the public catalogues); it therefore has room for missing matches, in addition to the risk of generalisation/specialisation mismatches. There are other factors which influence the choice of keywords, such as awareness of security jargon, familiarity with the catalogues themselves, and grasp of terms related to the system domain and the technologies involved. Apart from Porter's stems, we proposed the use of guided-words, inspired by HAZOP (Ericson, 2005; Wintner et al., 2001), as another way to make the choice of keywords more systematic. All in all, however, the choice of keywords affecting search results is an intrinsic problem in several other domains as well, such as Web search. Since end-user practice and training may minimise this issue in online search (Kim, 2001), we expect the same to happen with requirements engineers.

### 7.2. Future work

In the following, we identify two areas of further research as well as a list of short-term improvements to RISA.

#### 7.2.1. Feedback from the risk assessment steps to the requirements satisfaction steps

Although OpenRISA provides mechanisms for maintaining traceability between arguments and requirement models, they can be enhanced in a number of ways. First, risk assessment can identify new problem world domains, changes in the behaviour and properties of the problem world domains, and even new security requirements. Such new knowledge has to be integrated with the requirement models. This is currently done manually, but tool support for this task would be helpful. Second, since there can be a number of rebuttals and mitigations to an argument, the size of argument models often increases quickly after a few rounds of argumentation. It is difficult to visualise large argument models, and heuristics for partitioning arguments are necessary in OpenRISA.
Third, keywords, whether general or specific to the security domain, need to be collected to guide the search for risks during argumentation-based elicitation. We hope to extend the current ad hoc approach with HAZOP-like guided-words (as mentioned in Section 6.4), based on a domain ontology introduced through the effort of the entire research community, and on terms that often appear in the security catalogues. Finally, we aim to integrate other requirements languages in order to support their integration with risk assessment frameworks. For instance, we have developed a tool supporting the Problem Frames approach, called OpenPF (Yun et al., 2009). One way to integrate the two tools would be by means of graphical traceability. The integrated tool could allow each risk and mitigation to be explicitly linked with a problem world domain of a problem diagram drawn in OpenPF. Similarly, each problem world domain in the problem diagram could link to an argument or any part of it, maintaining the traceability between problem diagrams and argument diagrams.

#### 7.2.2. Risk-based prioritisation of arguments

Currently, RISA prioritises arguments based on risk level, which depends on experts' estimation of risks. The security catalogues supporting RISA provide some pre-defined, generic ratings of likelihood and of impact on confidentiality, integrity and availability, but these still need customisation. Prioritisation via risk level fits a risk-averse perspective which favours risk over, for instance, the cost of implementing mitigations. However, prioritisation could follow a risk-taking perspective where the cost of mitigations is more important than risks, or a perspective in between which gives importance to both risks and mitigations, taking cost/benefit into consideration (e.g., Franqueira et al., 2010).
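The in-between, cost/benefit perspective could be operationalised along these lines; the field names and figures are invented for illustration only.

```python
# Hypothetical cost/benefit ranking of mitigations: order by risk level
# removed per unit implementation cost (all numbers invented).

mitigations = [
    {"id": "M1", "risk_removed": 6.0, "cost": 2.0},   # benefit 3.0 per unit cost
    {"id": "M7", "risk_removed": 4.5, "cost": 1.0},   # benefit 4.5 per unit cost
]

ranked = sorted(mitigations, key=lambda m: m["risk_removed"] / m["cost"],
                reverse=True)
print([m["id"] for m in ranked])   # ['M7', 'M1']: the cheaper M7 outranks M1
```

A risk-averse ranking would instead sort on `risk_removed` alone and put M1 first, which is the contrast between the two perspectives described above.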
Although the current version of the OpenRISA tool already supports a priority value for mitigations, more development is necessary to fully operationalise these prioritisation possibilities. Another approach to overcome this risk-only prioritisation would be to adopt a value-based argumentation approach, where a value is associated with an argument and affects the conclusion about a claim. This accounts for the fact that stakeholders have different priorities, which might conflict. Adding values to arguments is a way to make these priorities explicit and, therefore, to expose them to scrutiny and criticism (Graydon and Knight, 2008). For instance, Burgemeestre et al. (2010) consider, in their example, arguments flagged with the values safety and auditability. Baroni et al. (2009) consider safety and cost, while Bench-Capon (2003) discusses a moral debate where an ordering on values allows distinct audiences to set different preferences, e.g., life has priority over property or the other way round. Value-based argumentation applied to RISA would be helpful, for example, to allow different prioritisations of arguments based on preferences among the values associated with mitigations, such as their implementation and operational costs, or among the values associated with risks, such as likelihood and impact. Such a direction requires changes in the OpenRISA tool to support more than one priority value for arguments.

#### 7.2.3. Further enhancements for adoption

Presenting all increments of arguments in one diagram may not be the most scalable solution when the diagram is large, even though textual input can support very large structures. The diagram part of the OpenRISA tool may be improved by adopting special-purpose visualisations, such as the tree-based view of arguments in Cyra and Górski (2011). Apart from visualisations, there are other ways to represent arguments graphically. For example, Prakken et al.
(2013) have proposed to represent ASPIC+ arguments as a participatory game. Ionita et al. (2014) have proposed a stripped-down version of ASPIC+, which has been received positively by practitioners. The OpenRISA representation may have a similar impact by presenting a simplified view to practitioners. Furthermore, the reasoning behind the formal argumentation needs to be understandable to users. In Yu et al. (2014), we have shown that a deductive theorem proof can provide provenance on how the conclusions about risks and mitigations are deduced from the grounds and warrants. Future work is to provide more traceability in the tool for such explanations on the fly. Although the argumentation side of OpenRISA has been evaluated with an industry evaluator at DeepBlue (Yu et al., 2011), the overall approach has not yet been applied fully in the wider industry.

8. Conclusion

Argumentation approaches organise the evidence for or against the claims of software security. They aim to strike a balance between perfect security and practical limitations. This paper has proposed a tool, OpenRISA, which supports the use of argumentation and risk assessment together to reason about the satisfaction of security requirements. OpenRISA has three main features. First, it supports representing both argumentation and security risk assessment in an integrated modelling language. Second, it provides an automated search of publicly available catalogues of common attacks and weaknesses, namely CWE and CAPEC, as evolving sources for security risk assessment. Third, it checks the soundness of the formalised arguments challenged by the security risk assessment. The tool has been demonstrated through an example of a PIN Entry Device system, part of which has been discussed in this paper.

Acknowledgments

The work is supported in part by the ERC Advanced Grant 291652 (Adaptive Security And Privacy, http://asap-project.eu), the SFI grant 03/CE2/I303_1, and Sentinels (http://www.sentinels.nl).
We would like to thank our colleague Paul Piwek for feedback on an earlier draft of the paper.

Yijun Yu graduated from the Department of Computer Science at Fudan University (B.Sc. 1992, M.Sc. 1995, Ph.D. 1998). He was a postdoctoral research fellow at the Department of Electrical Engineering at Ghent University (1999–2002), and a lecturer and research associate at the Knowledge Management lab of the Department of Computer Science at the University of Toronto (2003–2006). Since October 2006, he has been a Senior Lecturer at the Department of Computing and Communications at The Open University, UK. He is a member of the IEEE Computer Society and the British Computer Society. He is interested in engineering automated software tools to solve fundamental and practical problems in the research areas of quality requirements in general, and security and privacy in particular. For more information about him see http://mcs.open.ac.uk/yy66.

Virginia N. L. Franqueira is currently a senior lecturer at the University of Derby, UK. Prior to that, she held a lectureship position at the University of Central Lancashire (UK), a postdoc research position at the University of Twente (NL), and worked as an information security consultant (UK). She received her Ph.D. in Computer Science from the University of Twente (NL) in 2009, and her M.Sc. from the Federal University of Espirito Santo (BR). She is a member of the IEEE Computer Society and the British Computer Society. Her topics of research interest include security engineering, risk management and estimation, attack modelling and external insider threat. For more information about her see http://www.derby.ac.uk/staff/virginia-franqueira/.

Thein Than Tun received his Ph.D. in software engineering from London Metropolitan University in 2005. Since then, he has held research positions at The Open University (UK), the University of Namur (Belgium), and University College London (UK).
He is interested in Requirements Engineering approaches and their application in the development of feature-rich, secure and privacy-sensitive software systems. His research relates to priority requirements, requirements evolution, argumentation for security, feature interaction, failures of dependable systems and feature modelling. Dr. Tun is a fellow of the British Computer Society. For more information about him see http://mcs.open.ac.uk/ttt23.

Roel Wieringa is Chair of Information Systems at the University of Twente, the Netherlands. His research interests include requirements engineering, IT security risk assessment, and design science research methodology for software. He has written three books: Requirements Engineering: Frameworks for Understanding (Wiley, 1996), Design Methods for Reactive Systems: Yourdon, Statemate and the UML (Morgan Kaufmann, 2003), and Design Science Methodology for Information Systems and Software Engineering (Springer, 2014). He currently heads the research group of Services, Cybersecurity, and Safety at the UT. Find more information at http://wwwhome.ewi.utwente.nl/~roelw/.

Bashar Nuseibeh is Chair of Computing at The Open University (Director of Research, 2002–2008). Previously, he was a Professor of Software Engineering and Chief Scientist at Lero, the Irish Software Engineering Research Centre (2009–2012). He was also an academic member of staff (Reader) in the Department of Computing at Imperial College London and Head of its Software Engineering Laboratory (1990–2001), then continued as a Visiting Professor, maintaining strong research links with the Distributed Software Engineering Group. He is also a Visiting Professor at the National Institute of Informatics, Japan.
He is currently holder of a Royal Society-Wolfson Merit Award (2013–2018) and a European Research Council (ERC) Advanced Grant on Adaptive Security and Privacy (2012–2017), and serves as Editor-in-Chief of the IEEE Transactions on Software Engineering (2010–2014). Previously, he held a Senior Research Fellowship from The Royal Academy of Engineering and The Leverhulme Trust (2005–2007) and served as Editor-in-Chief of the Automated Software Engineering Journal (1995–2008). For more information about him see http://mcs.open.ac.uk/ban25.
Between a rock and a hard place: Management and implementation teams’ expectations of project managers in an agile information systems delivery environment

Introduction

Information system (IS) development projects have a reputation for failure in terms of budget overruns, poor timeliness and not meeting users’ expectations (Karleskey & Voord 2008; Savolainen, Ahonen & Richardson 2012; Yeo 2002). Evidence for this reputation is that only 29% of worldwide IS projects achieved project management (PM) success (The Standish Group International 2015). This failure rate is high when compared with other high-tech projects and is reason for concern, given that IS is increasingly seen as being of critical strategic and operational importance in organisations (Sauer & Reich 2009). Furthermore, within the knowledge economy, software is seen as a source of knowledge and IS development as a source of knowledge creation (Bailin 1997; Shongwe 2015), and creating knowledge affords organisations the opportunity to gain and sustain competitive advantages (Mitchell & Boyle 2010). In an attempt to address the failure of traditional approaches to IS delivery, organisations are turning to agile methodologies (Dybå & Dingsøyr 2008; Lindvall et al. 2004). Agile software development differs from the traditional waterfall approach in that, in a waterfall approach, a formal, sequential process of planning, analysing, designing, implementing and maintaining is followed. Agile, on the contrary, is characterised by fast and flexible results based on iterative delivery, frequent feedback loops and constant involvement of the customer (Cohn 2004; Rao, Naidu & Chakka 2011; Stettina & Horz 2015). The widespread adoption of agile implementation methodologies is attributed to their ability to respond to fast-changing business requirements, market conditions and technology innovation (Augustine, Payne, Sencindiver & Woodcock 2005; Stavrú et al. 2014).
From a PM perspective, the move to agile implementation has introduced a number of challenges. The project manager can no longer only be concerned with planning, organising and controlling, but instead has to learn to facilitate and coach to encourage collaboration between team members in line with the agile way (Highsmith 2003; Nerur, Mahapatra & Mangalaraj 2005). They also have to play an active role in project knowledge management, which contributes to project success (Srikantaiah, Koenig & Hawamdeh 2010:v). A further complication is that agile software development encourages autonomous, self-organising teams who are meant to share PM tasks and responsibilities such as estimation, planning and progress tracking. This new focus encroaches on the project manager’s territory and raises questions about the project manager’s role (Hoda & Murugesan 2016). To complicate matters even more, many, especially large organisations, struggle to make the transition from traditional to agile IS implementation methodologies (Dybå & Dingsøyr 2008; Nerur et al. 2005; Sidky, Arthur & Bohner 2007). In fact, it is more likely that large organisations employ both traditional and agile IS implementation practices in what is termed an ambidextrous approach (Vinekar, Slinkman & Nerur 2006), this duality presenting additional complex challenges to the PM role. Given these team-related and organisational challenges faced by project managers, the question arises: how should project managers adapt to fit into an agile implementation environment within large corporates? This research set out to explore this question by obtaining the perspectives of the following two important project stakeholders: the management team and the implementation team. How do they view the role of a project manager in an agile environment and what do they require from such a role to more successfully complete IS implementation projects? 
If project managers who operate in implementation environments that are moving into an agile space are aware of the traditional versus agile needs of their key stakeholders, they could adapt their approach to strike a balance between the old and new ways of working. The rest of this article is structured as follows: firstly, theoretical perspectives on IS projects in agile environments are presented; next, the case study research design used in this research is explained; and finally, the findings, discussion and key guidelines for the adapted role of PM are provided.

Information system implementation and project management challenges

The global business landscape has changed dramatically in the last few decades. Access to data, disruptive technological advances and the speed of innovation are some of the key drivers fuelling this revolution (Barkema, Baum & Mannix 2002). IS development is a crucial part of delivering technology innovation (Schwaber 2004), and as a result, business is both more aware and more critical of the success of IS projects. Business sees IS as being of strategic and operational importance – they want to see a return on their investment in IS and they have become more mature in their understanding of the nature of IS and IS projects (Sauer & Reich 2009). Despite many efforts for IS implementation to meet customers’ value needs in recent years, many software projects still fail to deliver value. They use more resources than planned, deliver less functionality at lower quality than expected and take longer to complete than anticipated (Barros, Werner & Travassos 2004). Some of the reasons offered for these failures include badly defined requirements, unrealistic expectations from business, poor reporting on project status and poor management of risks (Charette 2005). To manage risks, one has to manage knowledge (Neef 2005), and project knowledge is considered one of the most powerful tools in managing risk (Cooper 2003; Srikantaiah et al.
2010). PM can play a significant role in knowledge management and, therefore, risk management because of the distributed interaction it has with many layers in the organisation (Schiel 2010). Most IS professionals believe that using IS project methodologies will improve the PM success rate; however, project managers face a number of challenges that limit their success: unrealistic project deadlines, working on multiple projects simultaneously, ineffective use of PM software and lack of knowledge of PM methodologies (Terlizzi, De Souza Meirelles & De Moraes 2016). The nature of IS projects has also changed in recent years, with an increase in technical complexity, the rate of technology change, the importance of security, the business change involved in projects, the prevalence of virtual teaming, organisational instability and interdependence with other organisations. All of these factors contribute to the fact that information system project management has become increasingly challenging (Sauer & Reich 2009:184).

The move from traditional to agile

Agility in organisations emerged from multiple domains including logistics and manufacturing (Stettina & Horz 2015) and found its way into software development at the end of the 1980s (Nagel & Dove 1991). One of the drivers towards agile methodology involves moving away from the extensive use of planning, codified processes that enforce standardisation, rigorous software reuse, heavy documentation and big upfront design, which traditional software development processes demand (Ariko & Ososifan 2010; Nerur et al. 2005). In traditional waterfall methods, a sequential process is followed whereby projects force users to describe their needs accurately upfront, to capture as much information as possible, and only to deliver the requested features at the end of the process (Hong et al. 2011; Stoica, Mircea & Ghilic-Micu 2013).
This has created a challenge for most organisations, mainly because of the false impression that proper planning and collecting detailed user requirements assist project teams in learning everything that they need to know about user requirements (Goodpasture 2015). The reality is that because of rapidly changing technology, market and social conditions, most real-world development efforts are conducted in more volatile environments (Augustine et al. 2005; Nerur et al. 2005). As a result, system requirements change fast, often at ‘Internet speed’ (Baskerville et al. 2003). Agile development leverages the concept of lean manufacturing, which aims at deferring a decision until the last responsible moment, thereby assisting an organisation not to waste time on tasks until the odds of actually doing them are high (Schiel 2010). The agile approach follows four basic principles: individuals and interaction over process and tools, working software over comprehensive documentation, customer collaboration over contract negotiation and responding to change over following a plan (Fowler & Highsmith 2001). Agile methodology allows for the prioritisation of functionality, as development teams deliver product features sooner, through iterations. Agile also entails frequent feedback loops and iterative reviews (Stettina & Horz 2015). By using agile methods, customers are constantly involved in the development process by providing input into what should form part of the feature, which gives customers the assurance that the outcomes will be as close as possible to their requirements (Cooke 2014). The role of management in an agile environment is that of facilitating rather than controlling, and developers often work collaboratively or in pairs, whereas in the traditional approach they are required to work more individually within teams (Hoda, Noble & Marshall 2008).
Although there is increased awareness of and interest in agile methods, they appear to be more difficult to implement in larger projects (Dybå & Dingsøyr 2008). The speed of change and close customer involvement required by agile are particularly challenging for larger organisations with well-established processes and structures (Stettina & Horz 2015). There currently seems to be no structured approach for the adoption of agile methodology, resulting in organisations asking different questions on the way to proceed when adopting agile practices (Sidky et al. 2007). Agile systems development requires changes to organisational culture (Vinekar et al. 2006), and this may take several years to achieve (Adler & Shenhar 1990), implying that there are varying levels of agile maturity within an organisation. A number of agile maturity frameworks exist, an example being the agile maturity model (AMM) by Patel and Ramachandran (2009). This model lists five stages of agile maturity based on the level of agile process improvement: initial, explored, defined, improved and sustained. Many large organisations claim to follow an agile IS implementation approach, but in reality, because of current low levels of agile maturity within large organisations, many employ both traditional and agile implementation approaches (Vinekar et al. 2006). This presents unique challenges to the PM discipline and the role of the project manager in IS implementation projects within large organisations.

The challenges of agile project management

In response to the shift towards agile IS implementation methodologies, the PM discipline has started to reinvent itself in the form of the emerging agile project management (APM) discipline (Lee & Yong 2010; Persson, Mathiassen & Aaen 2012).
APM is a conceptual PM framework for undertaking software development projects in which the emphasis is moved from planning to execution (Chin 2004), and encompasses the study of which methods, tools and techniques to employ to improve the performance of the project by promoting agility (Conforto & Amaral 2016). Taking a brief look at APM may assist in a better understanding of the challenges that face traditional PM in agile environments. According to Augustine et al. (2005), APM lets software project managers and employees adapt to changing circumstances rather than trying to impose rigid formal controls, as in traditional linear development methods and this in itself poses a challenge to traditional PM approaches. Although PM remains an important and necessary part of any software development process in terms of managing the teams, customer relationships, cost reduction, risk management, maintaining project time line and budget, the manner in which it is done in an agile environment has changed (Hoda et al. 2008). Project managers are left in the lurch, since many of the commonly known PM practices and tools are geared towards large and relatively slow-moving projects (Chin 2004). One such example is the emphasis on documentation in the traditional environment. Because of the fast changing context of the agile space, if a project manager attempts to rigorously document variations on the originally agreed-upon plan, their time will be consumed with tracking, analysing and documenting ever more complex variations, and they risk demoting their position to that of an administrative role (Chin 2004). Roles and responsibilities have also changed. Agile introduced a new set of roles such as product owner and scrum master, which share some of the traditional responsibilities of the project manager, further blurring the line of PM (Hoda et al. 2008). 
To manage changes in requirements, an agile environment needs to allow for innovation and creativity, and by applying traditionally heavy PM techniques, the project manager risks stifling innovation. A balance is, therefore, required between too much process and too little process (Chin 2004). The agile manager is, however, responsible for establishing clear roles and responsibilities to ensure proper team alignment and accountability. They also need to be vigilant in identifying practices not being followed, understand the causes of the impediments and endeavour to remove any obstacles (Augustine et al. 2005). Chin (2004) recommends two strategies that project managers could employ to adapt to agile environments:

- take more of an outward-facing perspective to facilitate the integration of the project and the business
- focus energy on delivering results that solve business needs rather than staying within present project boundaries (Chin 2004).

Augustine et al. (2005) echo this sentiment by stating that project managers should become visionary leaders rather than uninspired taskmasters and embrace the notion of self-directed teams with 'light touch' leadership. Agile project managers must aim to steer and guide the various entities involved in an agile project, and encourage continuous learning and adaptation by acting as facilitators (Augustine et al. 2005; Nerur et al. 2005). Nerur et al. (2005), however, also warn that shifting from authoritative manager to facilitator may not be easy for people who thrive on authority. Having surveyed the literature, the researchers concluded that prior research into the changing role of the project manager in an agile environment seems to have been limited mostly to theoretical concepts, leaving a clear gap for empirical research to be conducted.
To provide a novel angle on what is expected from a project manager in an agile environment, perspectives were sought from the people who interact on a daily basis with project managers, namely the management team and the implementation team.

**Research design**

This research followed a qualitative, case study approach. This design was chosen to obtain descriptions of the phenomena (perceptions held by management and implementation teams) within the relevant context as described by the study participants, based on their experiences (Darke, Shanks & Broadbent 1998). Interpersonal expectations, as explored in this research, are highly complex, and the qualitative case study approach allowed the researcher to deal with this complexity by gaining insights into behavioural conditions from the participants' perspectives (Zainal 2007).

**Sampling and research setting**

Purposive sampling was used (Wilmut 2005) to identify 13 participants working within the IS department of a business unit within a large insurance company in South Africa. Five of the participants belonged to the management team and the remaining eight were part of the IS implementation team. The management team roles consisted of the IT executive, senior IT consultant, project manager head, software methodology head and a program manager. The implementation team roles included project managers, developers, testers, architects and analysts. Members of these teams have been part of both successful and failed agile projects within the organisation. Participants were included based on their advanced level of experience and knowledge of IS project implementations, with all of them having been involved in IS for at least 10 years. Participants were approached directly by the researcher and all participants signed informed consent forms, in which the nature of the research and their rights as participants were explained. The ethics committee of the University of Stellenbosch approved this research.
**Data collection and analysis**

Semi-structured interviews were conducted with all participants. A basic interview guide was used to elicit deep reflection about the experiences of participants during IS implementation projects (Barriball & White 1994; Merriam & Tisdell 2015). This approach allowed participants to share their understanding of the challenges they faced and, specifically, the role that project managers played. Interviews were digitally recorded, transcribed and proofread by the researcher. Data were analysed using an inductive qualitative content analysis approach (Hsieh & Shannon 2005). This analysis method was used because of its ability to extract the meaning of data through a process of coding and to describe theoretical concepts by means of an inductive approach (Cho & Lee 2014). The qualitative data analysis software package ATLAS.ti was used to identify initial categories through line-by-line coding (open coding), followed by category reduction through axial coding (Corbin & Strauss 1990). Initial coding yielded 380 codes, which, after axial coding and memo writing (Charmaz 2014), were reduced to a set of 10 main challenges faced during agile implementation projects. This article reports on one of these identified challenges, namely what is expected of project managers.

**Findings and discussion**

The expectations that the management team (hereafter referred to as 'M-1', 'M-2', etc.) and the implementation team (hereafter referred to as 'I-1', 'I-2', etc.) had of the PM role in an agile environment within a large corporate consisted of two main themes, namely 'Performing a governance role' and 'Interacting with the implementation team'. The latter theme had two sub-themes: 'Exerting control' and 'Serving as a coach and facilitator'. The expectations of both teams with regard to these themes are illustrated in Figure 1.
Although project governance expectations are shared, expectations on how to interact with the implementation team differ. The rest of this section discusses these findings.

**Performing a project governance role**

Traditionally, project governance is a key part of the PM role within IS projects and is used to achieve more predictable rates of PM success (Terlizzi et al. 2016). In an agile environment, where there is supposedly less focus on process and tools and more on individuals and interactions (Fowler & Highsmith 2001), one would assume that expectations from management and implementation teams would have changed. The findings of this research, however, reveal that both the management and implementation teams still value and expect project managers to fulfil a governance role, specifically relating to project delivery, risk management, reporting and budgeting. In terms of project delivery, both the management and implementation teams agreed that driving a project plan is one of the key roles that remain important for a project manager in an agile environment, as illustrated in these comments:

'A project manager has to focus on chasing the plan ensuring that people stick to what they promised, according to the plan.' [I-7; male; project manager]

'A project manager's role is to report progress and risks to the executives.' [M-1; male; IT executive]

'In our case I don't think you can do without the project manager because there's detail and high level issues that must be dealt with on a daily or weekly basis that impacts your timelines, so you can't do without the project manager role.' [M-3; male; head of PMO]

The traditional PM governance role was seen as critical in an agile environment, since there is the perception that pure agile, as opposed to the traditional waterfall approach, does not provide the necessary tools 'to ensure accountability and responsibility on the delivery team members as individuals' (M-1).
This finding is contrary to what agile software development advocates. Sprint teams are expected to be autonomous and self-organising and to keep themselves accountable by sharing some of the typical PM tasks, such as planning and estimation (Hoda & Murugesan 2016). The perception that individuals are not held accountable may point to a low level of agile maturity in this specific environment, possibly because there are no formal agile process improvement initiatives in place (Patel & Ramachandran 2009). According to Goodpasture (2015), self-organisation is often the result of highly motivated, highly skilled and experienced team members and evolves over time. From a management perspective, this need for control can perhaps be understood in the context of the high rate of IS project implementation failure (Savolainen et al. 2012). There seems to be an expectation that a project manager should be able to take ultimate responsibility for ensuring that tasks are completed. This is argued by I-6, who cannot imagine a team without a project manager who takes full responsibility, ensuring that due process is followed. A project manager is someone who takes the responsibility for 'pushing the people when things are falling behind' (I-6). The expectation is more than merely meeting a deadline. I-4's view is that the organisation expects the project manager to be able to connect the original business case with project delivery and also to play a role in ensuring that the desired end result is reached. From a management perspective, there was a concern that in an agile environment there is the danger that, without a project manager, no one takes ultimate responsibility for project delivery and, above all, project failure (M-1). This sentiment is in line with the fact that traditionally project managers have been held responsible for the time, cost and quality aspects of IT projects (Sheffield & Lemetayer 2013).
M-3 raised the importance of creating awareness around project risks:

'...in an Agile world, there are people thinking that you do not need project management roles when using an agile approach, which I think is not correct... you still need to know what the overall milestones are, what the budget is, what the risks are, all arising project issues, managing the stuff that does not change.' [M-3; male; head of PMO]

The implementation team in general affirmed this view by stating that a project manager manages the delivery of the project in terms of time and that a project manager should remain aware of the project scope and overall quality. In addition, governance disciplines ensure proper compliance and conformance to project ceremonies, and these are typically a PM role, even in an agile environment (I-5). In implementations of agile such as Scrum, the view is counter to that expressed by the participants: project managers have no project plans or time reporting, but instead rely on the frequent delivery cycle of the agile approach to show results (Schwaber 2004). There was a clear expectation from management that project managers must take responsibility for executive management reporting on agile projects (M-1; M-3). They were particularly adamant about this, given the perception in the organisation that agile projects do not provide clear feedback to management stakeholders:

'I think agile hasn't done a lot of work on communicating transparent scope changes and agreements outside the project...
because of the daily stand-ups, people know what's going on, it's crystal clear to the people around, but in terms of the person outside the project who's giving the money and who wants to know every six months as an investment if their money is safe, how do roles such as Head of Business, CIO, and other senior business representatives get visibility in an agile project?' [M-1; male; IT executive]

This perception, however, is contradicted by the nature of feedback in pure agile environments, which takes the form of regular, rapid feedback to customers, developers and end-users through frequent releases of the working software (Cohn 2004; Rao et al. 2011). It would appear that at senior management level, the need to be personally present at feedback sessions is a pragmatic obstacle:

'They (senior management) don't have time in a day to come look at that level of details as sponsors.' [M-1; male; IT executive]

It is clear from these findings that both the management and implementation teams value the governance role that project managers fulfil on agile projects, particularly with regard to project delivery, risk management, reporting and budgeting. This finding points to the dual nature of IS delivery methodologies in large organisations, where organisations, while realising the need to embrace agile, still have a need for the structure provided by traditional development approaches (Vinekar et al. 2006). In terms of interacting with project stakeholders, this implies that project managers are challenged to function in both paradigms, as revealed in the discussion that follows.

**Interacting with the implementation team**

A key change in an agile environment, as opposed to a traditional IS delivery approach, is the way in which the implementation team operates. The agile manifesto prescribes individual interaction over process and tools, collaboration over contracts and being responsive rather than following a rigid plan (Fowler & Highsmith 2001).
Agile IS development places a premium on people and their interactions. The emphasis is on teams and on the intense dynamics of team interactions (Vinekar et al. 2006). The findings discussed in this section reveal how the project manager has to renegotiate their role within the implementation team based on the expectations of both management and implementation. The two sub-themes capturing this aspect are 'Exerting control' and 'Serving as a coach and facilitator'.

**Exerting control**

Some of the management team members expect project managers to take on a command and control role, whereas the implementation team spoke out against such a form of restriction. Three of the management team members (M-1; M-2; M-3) supported the existing culture of control in the organisation and expected project managers to help sustain this approach in an agile environment:

'If you've grown up in the waterfall world, you've come up through the ranks there, you know what the issues are, you know that you got to pay attention to this and that, to make this go right.' [M-2; male; IT consultant]

'Say you have a team of 10 where eight pull their weight whereas the other two do not, but you as a project manager don't actually take on those two, if the other eight see that they are getting away with it, then the whole team takes a fall.' [M-3; male; head of PMO]

'One of the tenets of project management is meant to be a mechanism that identifies the goal and helps the community achieve that goal. So it implies an element of drive to control people by saying yes/no.' [M-1; male; IT executive]

Only one manager (M-4) disagreed, expressing that command and control are not necessary in an agile environment since the team manages itself. This expectation from management that seems to contradict the agile principles was pointed out by Cockburn and Highsmith (2001), who stated that the PM style in a traditional waterfall organisation tends to exert command and control over developers.
They further emphasise the importance of collaborative decision-making to improve agility within business. They believe that a command and control approach inhibits agility and ultimately fails the implementation teams in an agile environment (Cockburn & Highsmith 2001). IS professionals and project managers are knowledge workers. They often identify themselves in terms of their area of expertise and not the organisations they work for. Therefore, organisations that apply a 'boss and subordinate' approach to PM may face the risk of losing these professionals (Srikantaiah et al. 2010:4). Augustine et al. (2005) echo this sentiment by stating that 'skilled professionals don't adapt well to micromanagement'. Contrary to management's view, the implementation team felt strongly that there is no place for command and control from project managers in the agile space:

'Styles like micro management, authoritative, that doesn't work. Being too detail oriented. You can't be too set about what you're actually going to get, you have to be flexible, you have to be able to change.' [I-2; female; business analyst]

'The ones that are not succeeding with agile are those that are very commanding and controlling. If you going to have someone that is commanding and controlling, who will end up pushing people to do things, people are going to leave.' [I-3; male; developer]

I-3 argued that a project manager needs to be the person who works for the project team, one who is the eyes and ears of the people in the team and is expected to fight battles for the team, attending meetings with management to ensure they get the 'stuff required to keep the project running'. Styles like micromanagement and being authoritative do not work in agile environments (I-2). I-4 supported this view: in a waterfall project, project managers can be authoritative or dictatorial, but in their experience this does not work in an agile environment since the project structure is less
hierarchical, allowing the implementation team members to bypass the project manager by, for example, speaking directly to business. This is in fact aligned with the notion of self-organising, autonomous teams as advocated in an agile approach (Schiel 2010). Project managers should realise that in an agile environment, and specifically where the scrum methodology is used, individual team members are expected to be autonomous and self-organising (Augustine et al. 2005). As a result, some of the typical PM tasks are shared by the team members (Hoda & Murugesan 2016), and project managers should, therefore, become one of the team members instead of elevating themselves above the team. This can be achieved by project managers attending daily check-in sessions with the implementation team to show interest and to keep up to date with what is happening in the team (I-4). I-8 felt that project managers have the responsibility to win the trust of the teams so that the team chooses to listen to them:

'It is not all about you sitting there and doing the mantra and going on trying to get everyone to listen to you, it is about being able to be bigger than the room, and if you are not, you're going to struggle. Agile works where you have very high trust between the development organisation and the client or the management group.' [I-8; male; developer]

Developing and sustaining a sense of trust has been shown to be at the root of agile success (Schwaber 2004). Furthermore, trust may lead to balanced decision-making that could potentially shift the balance of power from management to the development teams (Nerur et al. 2005). Trust also encourages continuous learning, spontaneity, creativity and working together towards a single goal (Moe, Dingsøyr & Dybå 2009). I-4 makes a strong case for why command and control by project managers in an agile environment is not a good idea.
I-4 believes that each team member has enough authority to make decisions on what should happen in their teams. If the team is unable to think for themselves, it will not be able to 'catch some of the things that the project manager drops', which will ultimately impact the ability of the project manager to deliver on project expectations. Learning and growth are also stunted by a command and control management style:

'...if the team does only what the project manager tells them to do, the role of management or lead can never be any better in such a team.' [I-4; male; test manager]

This sentiment is echoed by Cockburn and Highsmith (2001), who emphasise the importance of collaborative decision-making to effect agility within business, and it also speaks to the notion of autonomous, self-managing teams (Hoda & Murugesan 2016). The tension that seems to exist between the view of the management team and that of the implementation team on the level of control that a project manager should exert in an agile environment can perhaps be ascribed to the difficulty of letting go of power. Nerur et al. (2005) warn that the shift from authoritative manager would challenge the attitudes and culture of people who enjoy authority, making implementing agile difficult. If project managers could step back from the command and control paradigm and allow agile teams to self-organise, there could be benefits:

'If one allows the team to sort themselves out, they realise that they have to work extra as a team and that they do not need to worry that much, because they do not feel forced to do it, or they do not have to do it all the time.' [M-4; male; methodology lead]

Schiel (2010) and Dybå and Dingsøyr (2008) support this view, stating that self-organising teams are empowered to organise work tasks in a way that gives them common ownership.
However, Lalsing, Kishnah and Pudaruth (2012) and Goodpasture (2015) highlight the fact that self-organisation is an advanced skill and that it may be difficult to recruit staff capable of forming self-organising teams. In addition, agile practitioners may encounter team-level and organisational-level barriers to self-organisation. On a team level, these barriers include a lack of individual commitment and leadership as well as a failure to learn, while pressure to work on multiple projects in parallel, organisational control and organisational insistence on specialists account for the organisational-level barriers. These barriers can be overcome by organising cross-training to create generalists, collocating teams in the same room, building trust and commitment and assigning people to one project at a time (Moe et al. 2009). Ultimately, it seems that a balance is needed between the needs of an agile environment and the realities of a large corporate (Vinekar et al. 2006). I-5 sums up why there is still a need for a project manager in agile environments in large corporates:

'...pure agile from its original definition, particularly the scrum, cannot really work in agile environments, and this is the reason why there is disciplined agile development which is much more suited for a corporate environment.'

They see the role of a project manager as someone who ensures that the team 'sticks to some of the culture that exists in project management.' [I-5; male; IT architect]

**Serving as a coach and facilitator**

If command and control is not appropriate in an agile environment, then what are the alternatives? The implementation team articulated the need for a project manager to act more like a coach and facilitator. I-7 stated that in an agile environment, a project manager must be a people's person and relationship driven. It is not about trying to build friendships, but rather about motivating people to work together (I-7).
This is in line with research showing that project managers need to actively target the development of trust through activities that will build it (Sauer & Reich 2009). This type of involvement starts with the project manager being more present:

'Project managers should be involved. Our team has seen some who are not, where they throw things over the wall and instruct developers or analysts to do their work, and return after the allocated time to report on the progress. The project managers must be involved daily to know what is going on.' [M-3; male; head of PMO]

Closer involvement could, however, pose a challenge. Sauer and Reich (2009) state that the complexity involved in contemporary IS projects implies that project managers may not be sufficiently knowledgeable in all the relevant aspects of the project. On top of this, project managers typically have a heavy workload and may not have the time to be closely involved. Project managers must act as facilitators and must acquire facilitation skills (I-1; I-8). Facilitation skills would assist project managers in their mediation role when, for example, they need to negotiate overtime work with both the people who have to conduct the work and the sponsors who have to pay for it (I-8). This is in line with the notion that project managers must increasingly take ownership of business goals and identify with the business issues instead of merely accepting that business goals are owned by a sponsor (Sauer & Reich 2009). They have to alter their behaviour to that of a facilitator, someone who does not direct but coordinates the individuals in a team (Nerur et al. 2005). Contrary to what most organisations believe, project success does not depend only on aspects such as project schedules, budgets and deliverables. Knowledge sharing and cooperation through facilitation play an important role in project knowledge management, which in turn has a positive bearing on project success (Srikantaiah et al. 2010).
In addition to being a skilled facilitator, I-3 saw the PM role as a humble and serving one, able to work collaboratively with the team and stakeholders to get the job done. I-8 illustrated this type of leadership style by referring to leaders such as Clem Sunter, the Dalai Lama and Nelson Mandela, describing them as the kind of individuals who have the characteristics required of a leader of an agile project. Kniberg and Skarin (2010) state that agile teams require coaching rather than management. This is supported by Goodpasture (2015), who agrees that coaching skills are important for project managers. Not all management participants agreed. M-4 argued to the contrary, pointing out that in an agile environment, the role of servant to the team should be fulfilled by the scrum master and not the project manager. Whether serving or not, I-8 believed that project managers should in any case instil energy and stimulate interest in people, all the while keeping their eye on the project plan to ensure that scope is delivered within the planned timelines and in accordance with the relevant quality assurance processes. Not everyone was convinced that project managers need to change their behaviour in an agile environment. M-1 argued that the traditional project manager role in the waterfall environment also required coaching and facilitation skills. M-1 believed that the same PM style could work in both waterfall and agile environments. I-4 tended to agree, but stressed that 'one could probably more easily get away with certain styles in waterfall that would not do so well in agile'. Perhaps the reason for the apparent conflict in expectations of project managers is that 'project managers apply traditional methods on projects for which they are not suited because they must align their efforts with broader organisational expectations' (Sheffield et al. 2013:469).
It may also be the case that agile software development drives agility in PM (Stettina & Horz 2015) and that, with time, adjustments will be made.

**Managerial implications and recommendations**

From the findings above, the following recommendations are made:

• Be cognisant of the level of agile maturity in the organisation to gauge the balance between traditional PM and APM needs. This can be achieved by using one of the available agile maturity frameworks to assess the current level of agile maturity and by putting in place improvement initiatives.
• Facilitate open conversations between project managers and their stakeholders to create understanding of what is expected of the project manager role. This can be achieved through facilitated workshops as well as informal discussions initiated by management.
• Train project managers in the art of coaching and facilitation to improve their ability to lead agile teams. Numerous coaching and facilitation courses are available through professional services providers and could be formally incorporated into career development and training programmes.

**Conclusion**

Knowledge is created through the creation of software during IS implementation projects; however, IS project implementation success is low (Bailin 1997; Savolainen et al. 2012; Shongwe 2015). In an attempt to address the disappointing track record of IS implementation projects, organisations are moving towards agile project implementation methodologies, where reduced formality, increased individual autonomy and self-organising teams are advocated. This shift encroaches on the territory of the traditional project manager role and raises questions about how project managers should adapt in order to remain relevant. This case study provided insights into the desired behaviours of an agile project manager by exploring the different needs of the management and implementation teams.
It would appear that project managers are stuck between a rock and a hard place when it comes to fulfilling the management and implementation teams' expectations. On the one hand, they are required to fulfil the traditional PM role: both the management and implementation teams require project managers to adhere to classic PM governance functions such as project delivery, risk management, reporting and budgeting. On the other hand, when it comes to the management of the implementation team, expectations diverge: the management team preferred a more traditional, command and control style of project manager, whereas the implementation teams favoured a more agile approach, expecting a project manager to earn their trust, refrain from micromanagement, allow the team to self-organise and act as coach and facilitator. The conclusion drawn from these findings is that, for project managers to remain relevant in the move towards a more agile IS implementation environment, they must become aware of the different expectations of their various stakeholders and adapt their behaviour accordingly. They need to engage openly with their stakeholders to understand their needs, acquire the new skills required in the agile environment, such as coaching and facilitation, and strike a balance between employing traditional and APM skills depending on the agile maturity of the organisation.

**Limitations and recommendations for future research**

This case study was limited to one IS department of a large South African insurance company. Future research should consider expanding the sample to include the views of more stakeholders in both the management and implementation teams and across various industries. It may also be beneficial to specifically obtain the views of project managers on how they see their role in an agile environment. Of the 13 research participants, 8 were from the implementation team and 5 from the management team.
A more equal participant distribution would help ensure that the findings are not distorted. To help address the issue of agile maturity, future research could develop an agility transition framework to help guide and monitor the move from traditional to agile.

**Acknowledgements**

**Competing interests**

The authors declare that they have no financial or personal relationship(s) that may have inappropriately influenced them in writing this article.

**Authors' contributions**

S.N. conducted the research as part of his MBA research project and co-wrote this article. N.T. was his research supervisor and co-wrote this article.

**References**

Chin, G., 2004, Agile project management: How to succeed in the face of changing project requirements, AMACOM Division of American Management Association, New York, NY.

Cohn, M., 2004, User stories applied: For agile software development, Addison-Wesley Professional, Boston, MA.

Kniberg, H. & Skarin, M., 2010, Kanban and Scrum: Making the most of both, C4Media, New York, NY.

Merriam, S.B. & Tisdell, E.J., 2015, Qualitative research: A guide to design and implementation, John Wiley & Sons, Hoboken, NJ.
This chapter presents a case study of how the abstractions and infrastructure discussed in Part III and Part IV of this dissertation may be applied to source code analysis and interactive code generation. The motivation for the case study prototype, an interactive unit test generator, is that current unit testing methodologies often result in poor test coverage due to rather ad-hoc approaches. Developers are encouraged to always write test cases, often as a driving force for new software features, or for systematic regression testing to avoid reintroducing known errors during software maintenance. This easily results in a development process focused around the individual test cases, rather than addressing the general requirements the cases are intended to represent. By expressing expected behaviours as axioms written in idiomatic Java code, it may be possible to improve the quality of the test code. Each axiom captures a design intent of the software as a general, machine-checkable rule, rather than an individual case, and may be used to generate unit tests. The quick and easy construction and integration of the test generator into an interactive development environment was made possible mainly due to the POM adapter technique described in Chapter 4 and the transformation runtime from Chapter 6. This chapter illustrates the applicability of the abstractions proposed in this dissertation, and, to a lesser degree, the workings of the generator tool. However, a detailed motivation and a discussion of the principal elements of the underlying test methodology are necessary before a detailed account of the transformation tool can be provided. The results in this chapter were obtained in collaboration with Magne Haveraaen.

### 11.1 Introduction

Testing has gained importance over the past decade. Agile methods see testing as essential in software development and maintenance.
Testing is given many different roles, from being the driving force of software development, to the somewhat more modest role of regression testing. The most influential of these is probably Beck's extreme programming (XP) method [Bec98], giving rise to the mind-set of test-driven development (TDD) [Bec02]. TDD assumes that every software unit comes with a collection of tests. A new feature is added by first defining a test which exposes the lack of the feature. Then the software is modified such that all (both old and new) tests are satisfied. The process of extending software with a new feature can be summed up in five steps: (1) Add a test for the new feature, written against the desired future API of the feature. (2) Run all tests and see the new one fail, while making certain that the unrelated features remain correct. (3) Write some code implementing the new feature, often just focusing on the minimal logic necessary to satisfy the tests written previously. (4) Rerun the tests and see them succeed. (5) Refactor the code to improve the design. A problem with a strict TDD approach is that each test is often casewise, i.e., it only tests one case in the data domain of a feature. While this opens up for implementing the logic of a new feature in small steps – or for inserting dummy code just to pass the test – it does not ensure that the full feature will be implemented. For these reasons, refactoring assumes a prominent place. Using refactoring techniques, the feature implementation may be incrementally generalised from the pointwise dependencies required by the tests to the full logic required by the intended application. Many tools have been developed to support TDD. In Java, test-driven development is most often done using JUnit [BG], a Java testing framework for unit tests. Arguably, the focus on agile methods has taken the focus away from formal methods at large.
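The five-step cycle above can be illustrated with a minimal, self-contained sketch of a casewise test and the feature code written to satisfy it (class and method names are hypothetical, and a plain Java assertion stands in for a JUnit assertion):

```java
// Hypothetical TDD-style casewise test: it checks a single point in
// the data domain of a feature, as discussed in the text above.
public class CounterTest {
    // Minimal feature under test, written after the test (TDD step 3).
    static class Counter {
        private int value;
        void increment() { value++; }
        int get() { return value; }
    }

    // One case only: incrementing twice from zero yields two.
    public static void testIncrementTwice() {
        Counter c = new Counter();
        c.increment();
        c.increment();
        if (c.get() != 2)
            throw new AssertionError("expected 2, got " + c.get());
    }

    public static void main(String[] args) {
        testIncrementTwice();
        System.out.println("ok");
    }
}
```

Such a test pins down only one point in the feature's data domain, which is precisely the limitation that the axiom-based approach in this chapter addresses.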
This is somewhat unfortunate, as a substantial amount of work has been done on effectively using formal methods as a basis for testing. For example, in 1981, the DAISTS system [GMH81] demonstrated that formal algebraic specifications could be used as a basis for systematic unit testing. DAISTS identified four components of a test: conditional equational axioms that serve as the test oracle; the implementation, which must include an appropriate equality function; the test data cases; and a quality control of the test data (at least two distinct values). The Daistish system [HS96] took these ideas into the object-oriented world by providing a DAISTS-like system for C++. The notation used for the axioms was also conditional-equational, giving a notational gap between the specification functions and the C++ methods. A more recent approach which also maintains this distinction is JAX (Java Axioms) [SLA02], which merges JUnit testing with algebraic-style axioms. An automation was later attempted in AJAX, using ML as the notation for the axioms. These experiments suggest that an algebraic approach to testing may result in much more thorough tests than standard hand-written unit testing. The emphasis on general axioms rather than specific test cases is the key to better unit tests. This chapter builds on the above-noted ideas by introducing and describing several novel improvements which lead up to a tool-assisted method for implementing modular axiom-based testing for JUnit and Java. The two main contributions of this chapter are:

- A case study of the practical application of program object model adapters, transformlets and the other infrastructure (Chapter 6) for expressing interactive program generation tools.
- A detailed discussion of a generator tool, JAxT (Java Axiomatic Testing) [KH], which automatically generates JUnit test cases from all axioms related to a given class.
Additionally, the tool construction resulted in several new techniques and developments related to the research in testing pursued by Haveraaen [HB05], including (1) a technique for expressing as reusable axioms the informal specifications provided with the Java standard API; (2) a flexible structuring technique for implementing modular, composable sets of axioms for an API, which mirrors specification composition, as known from algebraic theory; and (3) a discussion of practical guidelines for writing test set generators that exercise the axioms.

### 11.2 Expressing Axioms in Java

In the proposed approach, axioms are expressed as idiomatic Java code, not in a separate specification language, as is common with other approaches based on algebraic specifications. There are several benefits to this: First, the developers need not learn a new formal language for expressing the properties they want to test. Second, the axioms will be maintained alongside the code, and restructuring of the code, especially with refactoring tools, will immediately affect the axioms. This reduces or prevents any "drift" between the implementation and the specifications. Third, code refactoring, source code documentation and source navigation tools may be reused as-is for expressing and developing axioms. Fourth, it becomes easy to write tools to automatically produce test cases from the axioms. This is important because axioms may easily contain errors, which makes early and frequent testing of the axioms desirable.

#### 11.2.1 JUnit Assertions

The approach discussed here uses axioms to express invariants – assertions – about desired properties of an abstraction. The Java assert mechanism allows these properties to be stated as boolean expressions. If the expression evaluates to false, the assert mechanism will fail the program by throwing an `AssertionError` exception. For testing purposes, the JUnit system provides a wider range of assertions than what `assert` offers.
A notable difference is that the failure of one assertion terminates the immediately surrounding test, but not the remainder of the test suite. This allows a full set of tests to run, even if the first test fails. In addition, the JUnit assertions provide a detailed account of the error if the assertion does not hold, making it significantly easier to track down what the problem is.

### 11.2.2 Java Specification Logics

In standard specification theory, such as that used in [GMH81], axioms are formed from terms (expressions) with variables (placeholders for values or objects). If a variable is not given a value in the axiom, e.g., by quantification, it is said to be free. Interpreting this in the context of a programming language, the free variables of a term can be viewed as parameters to the term. This leads to the following definition:

**Definition 6** An axiom method, or axiom, is a public, static method of type `void`, defined in an axiom class. The method body corresponds to an axiom expression (term), and each method parameter corresponds to a free variable of that expression (term). When evaluated, an axiom fails if an exception is thrown; otherwise it succeeds.

It is recommended, but not required, that axioms use assertion methods in the style of JUnit, e.g.:

```java
public static void equalsReflexive(Object a) {
    assertEquals(a, a);
}
```

This axiom states the reflexive property for any object of class `Object` (or any of `Object`'s subclasses). The method `assertEquals(Object a, Object b)` is provided by JUnit and checks the equality of the values of the two objects using `a.equals(b)`. The axiom will also hold if `a` is the `null` object, since `assertEquals` safeguards against this case.
Specifying the desired behaviour of exceptions is also straightforward, albeit significantly more verbose:

```java
public static void equalsNullFailure() {
    Object a = null, b = new Object();
    try {
        a.equals(b); // calling equals on the null reference
        fail();      // a NullPointerException should have been raised
    } catch (NullPointerException e) {
        // OK
    }
}
```

Here, the effect of applying equals to a null reference is written as an axiom, named `equalsNullFailure`, with no free variables. The axiom states that, for any `a` and `b` where `a` is the null object, the expression `a.equals(b)` must raise an exception of type `NullPointerException`. (This is the Java semantics for invoking methods on null references.)

**Expressive Power** In specification theory, one often assesses the expressive power of a specification logic [MM95]. The simpler logics have less expressive power, but have better meta-properties than the more powerful logics, i.e., reasoning about the logic is less difficult. It is beyond the scope of this chapter to provide a detailed classification and comparison of Java versus other specification logics, except for the following brief remarks.

*Equational Logic* – The simplest logic for the specification of abstract data types – classes – is equational logic, which asserts that two expressions are equal (for all free variables). This is intuitively captured using `assertEquals`, based on the `equals` method of Java. However, there are several theoretical problems here. First, in normal logic a term is composed of mathematical functions deterministically relating inputs to outputs. In stateful languages, such as Java, one may modify one or more arguments rather than returning a value. A method may be (semi-)non-deterministic, or the result of a method may depend on an external state. Such methods are beyond standard equational logic.
As long as a method is deterministic, it is mostly straightforward to reformulate the terms of an equation as a sequence of statements computing the two values to be checked against each other. Second, the `equals` method, on which `assertEquals` is based, may not be correctly implemented. So one should treat `equals` as any other method, and hence also ensure that it satisfies certain properties: it should be deterministic; it must be an equivalence relation (reflexive, symmetric, transitive); and it should be a congruence relation, i.e., every method should preserve equality with respect to the `equals` method. Fortunately, these are properties that can be written as Java axioms, and then tested.¹ For instance, one may repeatedly evaluate `equals` on two argument objects and ascertain that `equals` always gives the same result as long as one does not modify the objects. Interestingly, the first two of these requirements are formulated in the Java API [Jav]. The last requirement will be discussed in Section 11.4. Third, there may be no way of providing a relevant `equals` method for some class, e.g., a stream class. This is known as the oracle problem [GA95]. However, using properly configured test setups, these classes may be made mostly testable as well.

¹ Testing will never prove these properties, but it will serve to instill confidence in their presence.

*Conditional Equational Logic* – A more powerful logic is conditional equational logic. This allows a condition to be placed on whether an equality should be checked for. In Java axioms, this is done by using an if-statement to guard whether the assertEquals should be invoked.
Axioms for symmetry and transitivity of the equals method use this pattern, e.g.:

```java
public static void equalsSymmetry(Object x, Object y) {
    if (x != null && y != null)
        assertEquals(x.equals(y), y.equals(x));
}

public static void equalsTransitive(Object x, Object y, Object z) {
    if (x != null && y != null && z != null)
        if (x.equals(y) && y.equals(z))
            assertEquals(x, z);
}
```

Here, an explicit null-check is required to avoid problems with null references in the assertions.

*Quantifier-free Predicate Logic* – Quantifier-free predicate logic is an even more powerful logic. It permits the expression of negations and conditionals anywhere in the logical expressions. This is trivially expressed using boolean expressions in Java.

*Full Predicate Logic* – Full predicate logic causes a problem with the quantifiers. A universal quantifier states a property for all elements of a type. There is no counterpart in programming, although supplying arbitrary collection classes and looping over all elements will be a crude (testing-style) approximation. Existential quantifiers may be handled by Skolemisation – given that there are algorithms for finding the value the quantifier provides.

*Java-style Axioms* – Java-style axioms have a different expressive power, and allow expressing properties not captured by the standard logics. For instance, the distinction between an object and its reference is easily handled by JUnit assertions. Exceptions, and methods that modify their arguments rather than returning a result, can also be dealt with easily. Further, statistical properties can be expressed, such as the well-distributedness requirement on the hashCode() method, or temporal properties related to processes and timings, even against physical clocks. The drawback to this extra expressive power is that one cannot immediately benefit from the theoretical results from the more standard specification logics.
That is, the general theoretical results from algebraic specifications are not directly applicable to specifications written in a "Java logic". This chapter will stick to the more value-oriented aspects of Java as a formal specification language, hopefully giving an indication of the intuitive relationship between this style and the standard specification logics.

### 11.3 Structuring the Specifications

All classes in Java are organised in a strict hierarchy, forming a tree with the type `Object` at the top. A class may implement several interfaces. An interface may inherit several other interfaces. Further, a given class should satisfy the (informally stated) assumptions and requirements of each of its supertypes.² This is illustrated by the following simple class Position, which is used to index the eight-by-eight squares on a chess board:

```java
public class Position implements Comparable<Position> {
    private int x, y; // range 0<=x,y<8
    public Position(int a, int b) { x=a % 8; y=b % 8; }
    public int compareTo(Position q) {
        return x-q.x; // ordering only on X-component
    }
    public boolean equals(Object obj) {
        final Position q = (Position) obj;
        return x==q.x && y==q.y;
    }
    public int getX() { return x; }
    public int getY() { return y; }
    public int hashCode() { return 3*x+y; }
    public void plus(Position q){ x=(x+q.x) % 8; y=(y+q.y) % 8; }
}
```

The method plus gives movements on the board, e.g., `k.plus(new Position(1,2))` for moving a knight `k`. The Position class is a subclass of `Object` (which is implicitly inherited) and it implements `Comparable<Position>`. The intent is that the `Position` methods should satisfy all requirements given by its supertypes, i.e., those from the class `Object` and those from the interface `Comparable<Position>`, as well as any requirements given for `Position` itself.
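One supertype requirement, documented for `Comparable` in the Java API, is that `sgn(x.compareTo(y)) == -sgn(y.compareTo(x))`. As a sketch of how such a requirement can be phrased in the axiom-method style of this chapter (the class name is illustrative, and a plain Java assertion stands in for a JUnit one):

```java
// Sketch of an axiom capturing the Comparable antisymmetry requirement
// from the Java API documentation. The class name is illustrative.
public class ComparableAxiomsSketch {
    public static <T extends Comparable<T>> void compareToAntisymmetric(T x, T y) {
        if (x != null && y != null) {
            int a = Integer.signum(x.compareTo(y));
            int b = Integer.signum(y.compareTo(x));
            if (a != -b)
                throw new AssertionError("compareTo is not antisymmetric");
        }
    }

    public static void main(String[] args) {
        compareToAntisymmetric(3, 5);    // Integer satisfies the property
        compareToAntisymmetric("a", "b"); // so does String
        System.out.println("ok");
    }
}
```

Note that the `Position` class above would violate a related `Comparable` recommendation, consistency with `equals`, since `compareTo` orders only on the X-component; this is exactly the kind of design decision that axioms make explicit and checkable.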
**Institutions** It would be desirable to write the requirements of classes and interfaces as sets of axioms, in a modular fashion, and then allow these sets to be composed soundly. The notion of institution [ST88] provides the mathematical machinery to permit expressing requirements in modules called specifications. Each specification provides a set of axioms. Operations exist for building larger, compound specifications from smaller ones. The specification for a type can be extended with new methods, thus dealing with the extension of classes or interfaces by, for example, inheritance. Axioms may be added to a specification, for example to provide additional requirements for a subtype. The union of specifications may also be taken. This allows the construction of a compound specification for all supertype axiom sets. The theory of institutions shows that one can safely accumulate any axioms from the supertypes as well as add new axioms for `Position`. More importantly, it shows that this accumulation will be consistent and not cause any unforeseen interaction problems, as often is the case when one considers inheritance among classes. In this sense, the modularisation and composition properties of specifications are a lot more well-behaved than those of software code. In addition, the framework of institutions provides significant freedom in organising axioms so that they become convenient to work with. The method described in this chapter uses this freedom to allow a flexible and modular organisation of axioms alongside the class and interface definitions.

² In Java terminology, a type encompasses both class and interface declarations.

### 11.3.1 Associating Axioms with Types

Axioms, in the form of static methods, are grouped into Java classes. This immediately integrates the axioms with all Java tools.
During the development process, axioms will be refactored along with the main code, e.g., when a method is renamed or the package hierarchy is modified. This is considerably more developer-friendly than using separate specification languages.

**Definition 7** An axiom class is any class `A` which implements a subinterface of `Axioms<T>` and contains only axiom methods (its axiom set), where `T` specifies which type the axioms pertain to.

The name of a class providing axioms may be freely selected and placed in the package name space, but it must be labelled with an appropriate axiom marker. Labelling is done by implementing one of the predefined subinterfaces of `Axioms<T>` to signify whether the axiom set is required or optional for `T` or its subtypes.

**Definition 8** Required axioms are defined using the `RequiredAxioms<T>` marker interface on an axiom class `A`, and state that all axioms of `A` must be satisfied by `T` and all its descendants.

Using this structuring mechanism, it is possible to group the required axioms for equals, introduced in Section 11.2.2, into a class `EqualsAxioms` which implements `RequiredAxioms<Object>`, by defining the methods equalsSymmetry, equalsTransitive and equalsNullFailure in this class (to complete the specification for equals(), axioms for testing reflexivity and determinism are also required). Similarly, hash code axioms may be captured as follows:

```java
public class HashCodeAxioms implements RequiredAxioms<Object> {
    public static void congruenceHashCode(Object a, Object b) {
        if (a.equals(b))
            assertEquals(a.hashCode(), b.hashCode());
    }
}
```
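The reflexivity and determinism axioms mentioned parenthetically above might be sketched as follows; plain Java assertions stand in for JUnit's `assertEquals`, and the `RequiredAxioms<Object>` marker is omitted so the sketch is self-contained:

```java
// Sketch of the remaining required equals axioms: reflexivity and
// determinism. Names follow the chapter's conventions but are
// illustrative, not part of the JAxT distribution.
public class EqualsReflexivityDeterminism {
    public static void equalsReflexive(Object a) {
        if (a != null && !a.equals(a))
            throw new AssertionError("equals is not reflexive");
    }

    // Determinism: evaluating equals twice, without modifying the
    // arguments in between, must give the same result both times.
    public static void equalsDeterministic(Object a, Object b) {
        if (a != null && a.equals(b) != a.equals(b))
            throw new AssertionError("equals is not deterministic");
    }

    public static void main(String[] args) {
        equalsReflexive("x");
        equalsDeterministic("x", "y");
        System.out.println("ok");
    }
}
```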
```java
public class PositionPlusAxioms implements RequiredAxioms<Position> {
    public static void associativePlus(Position p, Position q, Position r) {
        // compute pc = (p+q)+r;
        Position pc = new Position(p.getX(), p.getY());
        pc.plus(q);
        pc.plus(r);
        // compute p = p+(q+r)
        q.plus(r); // destructive update
        p.plus(q); // destructive update
        assertEquals(pc, p);
    }

    public static void commutativePlus(Position p, Position q) {
        Position pc = new Position(p.getX(), p.getY());
        pc.plus(q);
        q.plus(p); // destructive update
        assertEquals(pc, q);
    }
}
```

Figure 11.1: Axioms requiring that `plus` is associative and commutative.

Figure 11.1 shows some axioms for `Position`. Since `plus` modifies its prefix argument, a separate object `pc` is necessary so as not to modify `p` before it is used as an argument in the second additions (lines 9 and 15). This is not a problem for `q`, as it is not modified in the first additions. These axioms are destructive on the test data: the values of some of the arguments have been modified when the axioms have been checked (lines 8, 9 and 15). Axioms belong to the same package as the type they are associated with, so `PositionPlusAxioms` should be in the same package as `Position`. When axioms are retrofitted to an existing API, this placement may not be possible. One solution is to place the new axiom classes in an identically named package, but with `juxt` as a prefix to the package name. Then the axioms for `Object` would go in a package `juxt.java.lang` and the axioms for the Java standard collection classes would end up in `juxt.java.util`. Figure 11.2 shows how all the axioms for the supertypes, together with the axioms for `Position`, specify the design intent for `Position`.

Figure 11.2: Organisation of axiom sets for `Position`. Boxes with italics represent interfaces. Axiom classes depend on, or pertain to, the classes they describe, marked by a stippled arrow. Filled arrows indicate inheritance.
### 11.3.2 Optional and Inherited-only Axioms

A close reading of the Java API documentation [Jav] shows that it not only contains requirements – the kind of axioms described in this chapter – but also a multitude of recommendations and class-specific descriptions. For instance, the description of `Comparable<T>` contains the phrase "strongly recommended" and other places use the phrase "recommended". In the class `Object`, and some of its subclasses, such as enumerations, it is specified that reference equality is the same as value equality.

**Definition 9** Optional axioms are defined using the `OptionalAxioms<T>` marker interface on an axiom class `A`, and state that all axioms of `A` pertain to `T`, but are optional for any descendant of `T`, unless the programmer of a subtype has requested these axioms specifically. If `A` is an optional axiom set of `T`, this set may be inherited by a descendant `D` of `T` by adding the marker interface `AxiomSet<A>` to an axiom class pertaining to `D`.

The optional reference equality of `Object` is asserted by `equalsReference()` in the following axiom class, `ObjectEqualsReference`:

```java
public class ObjectEqualsReference implements OptionalAxioms<Object> {
    public static void equalsReference(Object x, Object y) {
        if (x != null)
            assertEquals(x.equals(y), x == y);
    }
}
```

This axiom class implements the `OptionalAxioms<T>` interface, and therefore the method `equalsReference()` will not be required automatically by the subtypes of `Object`.

<table>
<thead>
<tr>
<th>RequiredAxioms&lt;T&gt;</th>
<th>required by type T and all descendants</th>
</tr>
</thead>
<tbody>
<tr>
<td>OptionalAxioms&lt;T&gt;</td>
<td>required by type T, but not its descendants.</td>
</tr>
<tr>
<td>SubclassAxioms&lt;T&gt;</td>
<td>required by all subtypes of T but not by T</td>
</tr>
<tr>
<td>AxiomSet&lt;Ax&gt;</td>
<td>import axiom set Ax</td>
</tr>
</tbody>
</table>

Table 11.1: Summary of axiom structuring mechanisms.
Consider the following empty axiom class, which states axioms pertaining to the class EnumDemo (not shown).

```java
public class EnumDemoAxioms
        implements OptionalAxioms<EnumDemo>, AxiomSet<ObjectEqualsReference> {
}
```

While the class `EnumDemoAxioms` does not specify any axioms itself (although it could), it does activate the axioms from the set `ObjectEqualsReference`; that is, the optional equality axioms are now required for `EnumDemo` (but none of its subclasses, since `EnumDemoAxioms` is itself marked optional). As expected, even though `Position` inherits directly from `Object`, the object equality axioms (`ObjectEqualsReference`) have no relevance, since the axiom classes for `Position` do not implement the type `AxiomSet<ObjectEqualsReference>`. It is sometimes necessary to state axioms that pertain strictly to subclasses, exempting the base class itself. This is done using subclass axioms.

**Definition 10** Subclass axioms are defined using the `SubclassAxioms<T>` marker interface on an axiom class `A`, and state that all axioms of `A` pertain to all subtypes of `T`, but not `T` itself.

Consider a (possibly abstract) class `C` which contains declarations and common methods for its subclasses, where one wants to check as much as possible of `C` and be certain that all subclasses satisfy all axioms. By marking some axioms with `SubclassAxioms<C>`, these will not be tested on class `C` itself, but will be checked on all subclasses of `C`. Table 11.1 summarises the structuring mechanisms for axioms. These are used in the small example class hierarchy depicted in Figure 11.3. It is important to note that as all these relationships are formally marked in the code, they can be discovered automatically by a tool; see Section 11.6.

Figure 11.3: Organisation of axioms. The notation $x \models \mathbf{Ax}$ means that class $x$ satisfies all the axioms in the axiom list $\mathbf{Ax}$.
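The subclass-axiom mechanism can be sketched as follows; the marker interfaces are redeclared locally so the sketch is self-contained, and the declarations may differ in detail from the real JAxT library:

```java
// Sketch of the SubclassAxioms mechanism. The marker interfaces are
// redeclared locally for self-containment; JAxT's own declarations
// may differ in detail.
public class SubclassAxiomsDemo {
    interface Axioms<T> {}
    interface SubclassAxioms<T> extends Axioms<T> {}

    // A (possibly abstract) base class with a common method.
    static abstract class Shape {
        public abstract double area();
    }

    // Axioms checked on every subclass of Shape, but not on Shape
    // itself (which, being abstract, cannot be instantiated anyway).
    static class ShapeSubclassAxioms implements SubclassAxioms<Shape> {
        public static void areaNonNegative(Shape s) {
            if (s != null && s.area() < 0)
                throw new AssertionError("area must be non-negative");
        }
    }

    static class Square extends Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    public static void main(String[] args) {
        ShapeSubclassAxioms.areaNonNegative(new Square(3));
        System.out.println("ok");
    }
}
```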
### 11.4 Java API Caveats

Capturing the informal requirements described in the Java API documents into machine-checkable axioms is a non-trivial exercise. This section documents some of the challenges encountered in this process.

### 11.4.1 Override and Overload

Most object-oriented languages, including Java, distinguish between overriding a method and overloading a method. Method overriding occurs when a subclass redefines a method defined in one of its superclasses, thus altering the behaviour of the method itself without changing the class interface. On the other hand, method overloading allows the same method name to be reused for different parameter lists, e.g.:

```java
public class Test extends Object {
  int x, y;
  public boolean equals(Object obj){
    return x==((Test)obj).x
        && y==((Test)obj).y;
  }
  public boolean equals(Test obj){
    return x==obj.x && y==obj.y;
  }
}
```

The class `Test` provides two overloaded methods, where the first (lines 3–6) overrides the `equals` method from `Object`. Consequently, the axioms for `equals()`, defined for `Object`, will be applied to this method, while the `equals` method in lines 7–9 will not be tested. This might be surprising, but follows from the semantics demonstrated by the following method calls:

```java
Object o1 = new Object();
Object c1 = new Test();
Test c2 = new Test();
o1.equals(c2); // from Object
c1.equals(c2); // from lines 3–6
c2.equals(c2); // from lines 7–9
```

The overloaded version will only be called if both arguments have the declared type `Test` (or a subclass thereof). In a language with templates, like C++, one could use genericity to apply the axioms to any one of such overloaded methods. Unfortunately, this is not possible using Java generics (since the type erasure of a generic `<T> boolean equals(T o)` would be identical to `Object.equals(Object o)` defined in `Object`; this is forbidden by the type system), so the axioms for `equals` will have to be redeclared for class `Test` if they are to be applied to the second `equals` (lines 7–9).
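The dispatch rules described above can be observed directly in a runnable sketch; the class and printed markers are illustrative, chosen so that the statically selected overload is visible:

```java
// Runnable sketch of overload-vs-override dispatch: the overloaded
// equals(Test) is chosen statically from the declared types, while the
// overriding equals(Object) is reached by dynamic dispatch.
public class OverloadDemo {
    static class Test {
        // Overrides Object.equals: selected when the receiver or
        // argument is declared as Object.
        public boolean equals(Object obj) {
            System.out.println("equals(Object) called");
            return obj == this;
        }
        // Overloads equals: selected only when both receiver and
        // argument are declared as Test (or a subclass).
        public boolean equals(Test obj) {
            System.out.println("equals(Test) called");
            return obj == this;
        }
    }

    public static void main(String[] args) {
        Object c1 = new Test();
        Test   c2 = new Test();
        c1.equals(c2); // receiver declared Object: equals(Object) runs
        c2.equals(c2); // both declared Test: equals(Test) runs
    }
}
```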
### 11.4.2 `clone` and Other Protected Methods

The `clone()` method is declared as a protected method of `Object`. This makes it problematic to specify axioms, because a protected method is only accessible to subclasses. Since `clone()` is protected, it is not possible for an axiom method to invoke it on an object of type `Object` and, consequently, it cannot be adequately described. This is unfortunate, because the API documentation lists a number of recommended (optional) axioms for `clone()`. In principle, the same limitation holds for any protected method, and is shared by any testing approach where the testing methods are defined in a class separate from the class being tested. In Java, it is possible to circumvent this limitation by declaring the testing class in the same package as the tested class – classes in the same package may invoke each other's protected methods without being in a subtype relationship. For `clone()`, which is defined in the immutable package `java.lang`, axioms must be written specifically for every class that makes `clone()` public. The axioms will then apply to all subclasses of this class, but not to any other class that exposes the `clone()` method. Following the previous remarks, it is not possible to circumvent this restriction using Java generic method declarations. In the instance of `clone()`, another attractive possibility would be to state the axioms for the interface `Cloneable`. `Cloneable` is defined in the Java language specification as a marker interface used to enable the cloning algorithm encoded in the `clone()` method of `Object`. Unfortunately, `Cloneable` is an empty interface which does not (re)declare `clone()` as a public method.

### 11.4.3 The `equals` "Congruence" Relation

In the standard approaches to equational specifications, the equality test is a congruence relation. This is an equivalence relation which is preserved by all methods.
Preservation means that, for any two argument lists to a method, if the arguments are pairwise equal, then the results of the two method calls must also be equal. In Java, the equals method defined in `Object` is close to, but not entirely, a congruence relation. If it were, then for any two objects `a` and `b` with `a.equals(b)`:

- `a.hashCode() == b.hashCode()`, which is a required axiom in the Java API,
- `(a.toString()).equals(b.toString())`, but this is not an axiom in the Java API.

Unfortunately, this means that one of the central means for closely relating Java to equational specification theories is unavailable. For equals to have the congruence property, it would have to be implemented as a form of "meta property", requiring axioms to be written for every new method. Remember that overridden methods inherit such properties from the superclass. A tool like JAxT could easily handle this by either explicitly declaring the needed congruence axioms, or tacitly assuming them, thus generating the necessary test code even without making the axioms explicit. Even without tool support in the current iteration of JAxT, maintaining a congruence relation is a strongly recommended practice wherever feasible. Future releases may add facilities for handling the congruence property.

### 11.5 Testing

The previous section described how to express and organise axioms in and for Java. These axioms can be used as test oracles for ensuring that implementations exhibit the intended behaviour. For the testing setup to be complete, relevant test data must be provided. During testing, each data element from the test data set will be provided for the relevant free variables of the axioms.

### 11.5.1 Test Data Generator Methods

A test data set may be as simple as the data from a typical test case, as practised by typical agile or test-driven development. For the many cases where additional testing is desired, the JAxT framework has an infrastructure that opens up for a much more systematic approach to test data.
Creating the test set is the responsibility of the developer. The first step towards creating a test set is to implement a test set generator. The JAxT generator wizard, explained in Section 11.6, will provide a test set generator stub of the following form, for a class X:

```java
public class XTestGenerator {
    public Collection<X> createTestSet() {
        return null;
    }
}
```

The developer must fill in the `createTestSet()` method with code that produces a reasonable test set of X objects. This can be data from an existing test case, or a static collection of test values. For `Position` objects, the following test set generator method, placed in the class `PositionTestGenerator`, produces a collection of random, but valid, `Position` objects:

```java
public static Collection<Position> createTestSet() {
    final int size = 200;
    List<Position> data = new ArrayList<Position>(size);
    Random g = new Random();
    for (int i = 0; i < size; i++) {
        data.add(new Position(g.nextInt(8), g.nextInt(8)));
    }
    return data;
}
```

The random number generator provided by the Java standard library is used to produce test data. Some authors, such as Lindig [Lin05], have reported that random test data may be more effective than most hand-crafted data sets in detecting deficiencies. In a similar approach, discussed by Claessen and Hughes [CH00], algorithms are used for deriving random test data sets based on abstract data types. In the particular case of `Position`, the complete test data set of the 64 distinct position values could be provided, but for completeness, distinct objects with equal values are also needed, due to the difference between equality on object references and equality on object values. If the objects of a class X are very time-consuming to instantiate, one might consider implementing a test set generator that extends `IncrementalTestGenerator`.
This class provides a ready-made `createTestSet()` method that returns a collection which instantiates its elements on demand. The developer must implement the method `X generateNewValue(int index)`, which must produce random objects of type X. The `IncrementalTestGenerator` takes care of memoization. Using `IncrementalTestGenerator` will not result in better total running times for the tests. It may, however, allow the developer to uncover errors earlier in the event of a failed axiom, since no time is spent up front generating a full test data set that may never be traversed entirely. A formulation using the `generateNewValue()` method may sometimes be cumbersome. For this reason, the use of `IncrementalTestGenerator` is optional.

### 11.5.2 Determining Test Set Quality

The JAxT library offers a few simple, but powerful, checks for properties of test sets. For example, there is a method that checks whether the provided collection has at least two distinct data values, similar to the requirements in [GMH81]. Another method can test whether there are at least three distinct objects with equal values – necessary if a transitivity axiom is to be exercised. The purpose of these checks is that developers can apply them to the test sets produced by their generators and thereby ensure some degree of test set quality. The quality assurance of test sets usually occurs during the development of the test set generators themselves. In general, it is not necessary to run test quality metrics as part of a test suite, provided that the developer has checked and acquired sufficient confidence in the test set generators. Additional checks, in particular statistical metrics which can be used to judge distribution characteristics, are scheduled for inclusion into JAxT, but how to best integrate existing test generation approaches is still an open problem. There is a wealth of material to choose from, however, such as [DO91, TL02].
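As an illustration of the kind of quality checks described above, the two properties – at least two distinct values, and at least three distinct objects with equal values – can be sketched in plain Java as follows. The class and method names here are illustrative; the actual JAxT library methods may differ:

```java
import java.util.*;

// Illustrative sketch of test-set quality checks in the spirit of those
// described above; names are hypothetical, not the real JAxT API.
public class TestSetChecks {

    // At least two distinct data values, needed to exercise most axioms
    // non-trivially (cf. the requirements in [GMH81]).
    public static <T> boolean hasTwoDistinctValues(Collection<T> data) {
        return new HashSet<>(data).size() >= 2;
    }

    // At least three distinct objects (by identity) with pairwise equal
    // values, needed to exercise a transitivity axiom.
    public static <T> boolean hasThreeEqualButDistinctObjects(Collection<T> data) {
        List<T> items = new ArrayList<>(data);
        for (int i = 0; i < items.size(); i++) {
            int count = 1; // the object at index i itself
            for (int j = 0; j < items.size(); j++)
                if (i != j && items.get(i) != items.get(j)
                        && items.get(i).equals(items.get(j)))
                    count++;
            if (count >= 3)
                return true;
        }
        return false;
    }
}
```

A developer would typically call such checks from the test set generator itself, or from a one-off sanity test, rather than from the generated test suite.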
### 11.5.3 Running the Tests

When combining the test data with the axioms to run the tests, there are several issues to take into account. Some of the axioms may be destructive on the data sets, so each test data element must be generated anew for each use. This is normally handled by fixtures in unit testing tools, but for efficiency reasons, one may choose to place test data generation in the test methods themselves. While this entails more verbose unit test methods, it presents no extra burden on the user, since the test methods are automatically generated from the axioms and the test data generator methods. With automated test data generation, it becomes very easy to (accidentally) create large data sets. In general, larger test sets improve the quality of the testing, but run-times may become excessive when testing axioms with many arguments. In the example Position class, all tests for axioms with up to three free variables take less than a couple of seconds and are within the normal time-frame for repeated unit testing in the TDD approach. For more complicated axioms, such as checking that equals is a congruence for \( + \), a quadruple loop over the data set is required and takes about 30 seconds. Within the edit-compile-run cycle of regular unit testing, such large data sets are not ideal. The framework currently provides limited support for adjusting data set sizes. Additional work independent of JAxT is required to allow the developer to flexibly tune the size, and to continuously vary the trade-off between thorough testing and short testing times.

### 11.5.4 Interpreting Test Results

Writing code and axioms, and their associated tests, is error-prone. For this reason, early and frequent testing is valuable. When a test fails, it only states that there is some mismatch between the formulated axiom and the implementation of the methods used in the axiom.
It is important to remember that, at least in the beginning, errors can just as easily be in the axiom as in the code. Therefore, both must be checked carefully. As always, newly written pieces of code, whether a new axiom or a new class, are typically more likely to contain errors than legacy pieces that have already been thoroughly tested.

### 11.6 Test Suite Generation

The techniques described in the previous sections are structured and formal enough for a tool to aid the developer in deriving the final unit tests and test set generators. The author has experimented with such automatic generation of unit tests from axioms by building a prototype testing tool called JAxT [KH]\(^3\). This section describes the findings from this prototyping experiment. Below is the test class generated by JAxT for Position, with comments removed:

```java
public class PositionTest extends TestCase {
    private Collection<Position> testSetPosition;

    public PositionTest(String name) {
        super(name);
    }

    protected void setUp() throws Exception {
        super.setUp();
        testSetPosition = PositionTestGenerator.createTestSet();
    }

    protected void tearDown() throws Exception {
        super.tearDown();
        testSetPosition = null;
    }

    public void testObjectReflexiveEquals() {
        for (Position a0 : testSetPosition)
            reflexiveEquals(a0);
    }

    public void testComparableTransitiveCompareTo() {
        for (Position a0 : testSetPosition)
            for (Position a1 : testSetPosition)
                for (Position a2 : testSetPosition)
                    transitiveCompareTo(a0, a1, a2);
    }
}
```

\(^3\)JAxT stands for Java Axiomatic Testing.

The following sections will explain how this test case was derived.

### 11.6.1 Generating Tests from Axioms

The task of JAxT is to automatically derive unit tests for a given class \( C \) using the axioms from the set of all axioms associated with \( C \) – each axiom induces a new unit test.
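The generated test methods above delegate to axiom methods such as reflexiveEquals and transitiveCompareTo. As a hedged sketch of what such axiom methods might look like (the marker interface is reproduced so the example is self-contained; the actual axiom methods shipped with JAxT may differ):

```java
// Minimal stand-in for the Axioms<T> marker interface described below.
interface Axioms<T> {}

// Sketch of an axiom class; reflexiveEquals would normally belong to an
// axiom class for Object, but is included here for brevity.
public class ComparableAxioms implements Axioms<Comparable<?>> {

    // Transitivity of compareTo: if a <= b and b <= c, then a <= c.
    public static <T extends Comparable<T>> void transitiveCompareTo(T a, T b, T c) {
        if (a.compareTo(b) <= 0 && b.compareTo(c) <= 0 && a.compareTo(c) > 0)
            throw new AssertionError("compareTo is not transitive");
    }

    // Reflexivity of equals: every object equals itself.
    public static void reflexiveEquals(Object a) {
        if (!a.equals(a))
            throw new AssertionError("equals is not reflexive");
    }
}
```

Each such static method is a universally quantified property; the generated test loops substitute every combination of test data elements for the method's parameters.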
The set of associated axioms can be found by inspecting the axiom classes associated with \( C \). When the axioms were created, the programmer clearly specified which axioms directly pertained to \( C \) by placing \( C \)'s axioms into those classes implementing the interface \( \text{Axioms<}C\text{>} \). By placing the marker \( \text{Axioms<}C\text{>} \) on an axiom class \( AX \), all (static) methods in \( AX \) are considered to be axioms for \( C \) and must therefore be fulfilled by any descendant of \( C \). This implies that \( C \) itself may have inherited axioms from a related superclass. For any (direct or indirect) superclass \( P \) of \( C \) or (direct or indirect) interface \( I \) of \( C \), one or more axiom classes may exist with \( \text{Axioms<}P\text{>} \) or \( \text{Axioms<}I\text{>} \) markers. Methods in these classes are also considered to be axioms associated with \( C \).

#### Computing Axiom Sets

In order to produce the final set of test methods for a class \( C \), all applicable axiom methods must be found. As suggested in Figure 11.3, axioms for all named types provided by \( C \), i.e. its superclasses and the interfaces implemented by \( C \) or any of its supertypes, are searched. The algorithm `compute-axioms()`, detailed in Algorithm 1, produces the final list of axiom methods for \( C \). The resulting list is then fed into a test case generator. The test generator works as follows: First, it computes the required axiom sets of \( C \); that is, all classes which implement `RequiredAxioms<C>` or `OptionalAxioms<C>`. Next, the supertypes of \( C \) are traversed and, for each, the set of subclass and required axioms is collected. After this is done, an initial set of axiom classes (i.e. axiom sets) is given in \( \Xi \). Note that the axiom classes themselves may pull in additional (optional) sets, via the `implements AxiomSet<AX>` mechanism. These optional sets are then added to \( \Xi \) and the final set of axiom sets is obtained.
The methods of these axioms are the final product of `compute-axioms()`.

**Algorithm 1 compute-axioms(C)**

\[
\begin{aligned}
&\Xi := \text{required-axiom-sets-of}(C) \cup \text{optional-axiom-sets-of}(C) \\
&\textbf{for } T \in \text{supertypes-of}(C) \textbf{ do} \\
&\quad \Xi := \Xi \cup \text{subclass-axiom-sets-of}(T) \cup \text{required-axiom-sets-of}(T) \\
&\textbf{end for} \\
&\textbf{for } AX \in \Xi \textbf{ do} \\
&\quad \Xi := \Xi \cup \text{super-axiom-sets-of}(AX) \\
&\textbf{end for} \\
&\textbf{return } \bigcup_{AX \in \Xi} \text{methods-of}(AX)
\end{aligned}
\]

**User interaction**

Not all details of the generation can be inferred from the source code, such as which package the generated test class should be placed in, though reasonable defaults can be suggested. To support various working styles and project organisations, the prototype offers a graphical generator wizard that allows the user to optionally correct the assumed defaults. The user invokes the test generator from the GUI by selecting a single class, multiple classes, or packages. A wizard appears that allows the user to select which classes to generate test set generators for and where to place them. Further, the user can select which package the generated test case should be placed in and whether to generate a test suite, if tests for multiple classes have been requested. The user may also include additional axiom libraries that may contain relevant axioms for the type hierarchy at hand. For example, the JAxT library already provides axioms for `Object` and `Comparable` that may be reused as desired. This axiom inclusion feature supports reuse of axiom libraries that may be distributed separately from existing implementations. A major benefit of this design is that axioms can easily be retrofitted to existing libraries, such as the Java Standard Libraries, and such axiom libraries can be incrementally developed without modification of, or access to, the source code of the original library. Thanks to JAxT, users can easily and incrementally update existing test classes, e.g. when axioms have been added or removed. By reinvoking JAxT, the test classes will be regenerated and additional test class stubs for new data types will be created. As Eclipse refactorings carry through to the axioms, the axioms will remain in sync with the code they test.

#### Generation of Tests

When the user requests axiom tests for a class X, a corresponding XAxiomTests is generated (the name may be customised). This is a JUnit fixture, i.e. XAxiomTests derives from junit.TestCase, provides a set of test methods, and may provide a setUp() and a tearDown() method. The setUp() method initialises all necessary test sets, as follows:

```java
setUp() {
    testSetT0 = T0Generator.createTestSet();
    ...
    testSetTn = TnGenerator.createTestSet();
}
```

The createTestSet() methods return Collections which will be traversed by the tests. The exact set of TjGenerator calls is discovered from the argument lists of the axioms exercised by this test fixture. For each axiom ax(T0, ..., Tn) in axiom class A, a testAax() method is generated, of the following form:

```java
/** {@link package.of.A#ax(T0, ..., Tn)} */
public void testAax() throws Exception {
    for (T0 a0 : testSetT0)
        ...
        for (Tn an : testSetTn)
            A.ax(a0, ..., an);
}
```

This test will invoke the axiom A.ax() with elements from the (random) data sets testSetTj. The generated Javadoc for testAax() will link directly to the axiom being tested. After all tests have been run, the tearDown() method is executed, which takes care of releasing all test sets:

```java
tearDown() {
    testSetT0 = null;
    ...
    testSetTn = null;
}
```

For the convenience of this example, the pattern above has been idealised.
The generator produces slightly different `setUp()` and test methods in cases where the data sets are not shared between all test methods. Consider the situation where a fixture has two test methods, `testCaseA()`, which uses the data set `testSetA` only, and `testCaseB()`, which uses `testSetB` only. Since `setUp()` is invoked before every test method, it is wasteful to initialise both `testSetA` and `testSetB` every time. Therefore, the generator will only put test sets which are shared among all test methods in `setUp()`. Local invocations of `createTestSet()` for the other test sets will be placed in the test methods themselves.

### 11.6.2 Organising Generated Tests

The test generator produces two types of generated artifacts: the test set generator stubs, which are meant to be fleshed out by the programmer, and the unit tests, which are meant to be executed through JUnit and never modified. For this reason, it is recommended that unit tests are generated into a separate package, such as `project.tests`, to keep them apart from hand-maintained code. If there are additional, hand-written unit tests in `project.tests`, placing the generated tests into a separate package, such as `project.tests.generated`, is preferred. The test set generator stubs are clearly marked as editable in the generated comments, and the test cases are marked as non-editable. As previously stated, the generator can produce a suggested test suite, based on a package or a project, that lists all the tests requested in the same invocation of the test generator wizard. The purpose of these suites is to catalogue tests into categories so that the developer can execute different categories at different times; some axioms may result in very long-running tests, others may not be interesting to test at each test suite execution, and others still should be executed very frequently. Both the test suites and the test set generator stubs are freely editable.
The generator will never overwrite these artifacts when they exist, even if the developer accidentally requests it.

### 11.6.3 Executing Tests

The current prototype tool supports two primary modes of test execution: from the command line and through an interactive GUI. The command-line mode is intended for integrating the tests into nightly build cycles, or other forms of continuous integration. We impose no restrictions on the management of test suites – the tool merely aids in producing suggested starting points – so varying degrees of testing may be decided on a per-project or per-build basis. By organising the generated tests into categories, it becomes easy to select which sets of axioms should be exercised in any given build. The interactive mode reuses the JUnit framework. Once a test case (or test suite) has been generated, it can be immediately executed by the developer like any other JUnit test. Since there is a direct link in the Javadoc for every generated test method, it is trivial to understand which axiom is violated when a given test method fails. JAxT does not provide any test coverage analysis itself, but existing solutions for JUnit, such as Cobertura [Cob] and NoUnit [NoU], are applicable.

### 11.7 Implementation

The JAxT tool is implemented as a plugin to the Eclipse development platform [Ecl]. It is divided into the components seen in Figure 11.4. The POM adapter technique, described in Chapter 4, and the Eclipse integration framework provided by Spoofax, discussed in Chapter 9, were crucial for its construction. The test generator wizard is very similar to the one provided by JUnit: the JUnit test generator is applied to a class C and produces a testing stub for each method in C, whereas JAxT, when applied to C, produces an immediately executable test for every axiom pertaining to C. The wizard interface collects all necessary user input.
When the forms are complete, control passes to a compiled MetaStratego transformlet which implements all the analysis and generation logic. The script uses the Stratego/J FFI functionality to call into the Eclipse Compiler for Java (ECJ) and query for all necessary compilation units (.java and .class files). For compilation units that have source code, ASTs are extracted and adapted to terms using the POM adapter shown in Chapter 10. For "sourceless" compilation units, i.e. .class files, ECJ provides a read-only, high-level inspection interface. An additional POM adapter for this interface was implemented during the course of this work.

Figure 11.4: Components of the JAxT generator.

With this adapter, the JAxT transformation logic can traverse and inspect these compiler objects as well. The adapter provides a small signature for the notions of method and type bindings, and by using it, many computations on the type hierarchy become very succinct and easy to express, e.g.:

```
ecj-all-supertypes-of =
  ?TypeBinding(_,_,superclass,superfaces,_,_)
  ; <collect-all(?TypeBinding(_,<id>,<id>,_))> [superclass | superfaces]
  ; map(!DottedName(<id>))
```

This three-line strategy computes the transitive closure of all superclasses and superinterfaces (i.e. supertypes) of a given class (or interface), using the generic traversal strategy `collect-all` provided by the standard Stratego library. The strategy produces a list of `DottedName` terms, containing the dotted (fully qualified) names of the supertypes. Note that the supertype hierarchy may in the general case be a directed acyclic graph and is rarely a tree. Even if the plain Stratego language is used, this presents no problems. Potential problems due to term building in graphs, discussed in more detail in Chapter 7, are avoided by only providing a read-only interface. Traversals will always terminate since there are no cycles.
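For comparison, the same computation can be sketched in plain Java using reflection. This sketch is not part of JAxT; it merely mirrors what the three-line strategy above computes, and makes the succinctness comparison concrete:

```java
import java.util.*;

// Illustrative plain-Java equivalent of ecj-all-supertypes-of:
// collect all (transitive) superclasses and interfaces of a class.
public class Supertypes {

    public static Set<Class<?>> allSupertypesOf(Class<?> c) {
        // The supertype hierarchy is a DAG, so the result set doubles
        // as a visited set to avoid processing a shared interface twice.
        Set<Class<?>> result = new LinkedHashSet<>();
        Deque<Class<?>> work = new ArrayDeque<>();
        work.push(c);
        while (!work.isEmpty()) {
            Class<?> t = work.pop();
            if (t.getSuperclass() != null && result.add(t.getSuperclass()))
                work.push(t.getSuperclass());
            for (Class<?> i : t.getInterfaces())
                if (result.add(i))
                    work.push(i);
        }
        return result;
    }
}
```

For example, `allSupertypesOf(Integer.class)` includes `Number`, `Object`, `Comparable` and `Serializable`.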
Once the script has extracted the necessary information from the compilation units, it will assemble ASTs for what will become the final JUnit test class and the test generator stub classes. Each of these ASTs will be written to a separate file. The Eclipse code formatter will be used to pretty-print the result. This ensures that the final result is in accordance with the user-configured settings for the Java project in which JAxT is applied.

### 11.8 Discussion and Related Work

The standard Java documentation, and that of many other software libraries, is rife with examples of formal requirements. These are typically neither machine-readable nor machine-checkable. As a result, these requirements are often inadvertently violated, frequently resulting in bugs which tend to be difficult to trace. The goal of program verification and validation techniques, including (unit) testing, is to increase developer confidence in the correctness of their implementations. If a technique for formalising the library requirements were devised, and tests could be automatically generated from this formalisation, confidence that the library abstractions are used correctly would receive a significant boost. This is what JAxT, and other axiom-based techniques for test generation like JAX [SLA02], DAISTS [GMH81] and Daistish [HS96], do. The testing approach sketched in this chapter is a continuation of the JAX tradition [SLA02] and is, in a sense, a combination of two techniques for ensuring robustness of software: test-driven development and algebraic specifications. The ideas of axioms and modular specifications are taken from algebraic specifications, as a way of succinctly describing general properties of an implementation, as opposed to the case-by-case approach normally advocated by TDD. Principles for integration into graphical development environments and the design of practical tools for test code generation and immediate execution of tests were inspired by TDD approaches.
These principles allow the proposed method to bring instant feedback to the developer. Any approach to (semi-)automatic test generation will eventually have to be implemented using some programming language. Experience from this case study suggests that constructing the generator in a language-general, domain-specific transformation language has several benefits compared to the author's previous experiences with implementing language processing in strictly general-purpose languages.

**Succinctness** – Compared to an implementation in a general-purpose programming language, the transformation logic expressed in Stratego becomes compact and to the point (cf. the more detailed examples in Chapter 10). If care is taken to name the strategies and rules appropriately, most of the transformations read fairly well.

**Infrastructure Reuse** – The initial development of JAxT was very quick due to the reuse of the Eclipse Compiler infrastructure. Even if one accounts for the time taken to construct the additional POM (for type bindings), and the time spent constructing the AST POM itself, this was considerably less than the time necessary to construct a robust Java 1.5 parser from scratch, and much less than a stable type-checking frontend. The adapters were semi-automatically extracted from the APIs in a matter of hours and, after a few more hours of plugging in the relevant FFI calls, the type checker was ready to be reused from within Stratego.

**Development Tools** – A weak point of most non-mainstream languages is the state of their development tools. This is also the case with Stratego which, for example, lacks an interactive debugger. The Spoofax development environment (Chapter 9) helped a lot, but additional work is required if the environment is to reach a level of quality similar to that of the mainstream language environments.

**Code Templates** – Much of the generated code is composed from pre-defined templates.
These are expressed using the abstract syntax of the ECJ, shown in Chapter 10. Both readability and maintainability would receive a significant boost if these templates were expressed using concrete syntax, i.e. as concrete Java code. However, depending on the complexity of the templates, this would require the construction of a full Java 1.5 grammar which is embeddable into the transformation language.

### 11.9 Summary

This chapter presented a case study of how the techniques proposed in Part III and Part IV of this dissertation are applicable to code analysis and interactive code generation. The study presented a tool-assisted approach for testing general properties of classes and methods, based on axioms and algebraic specifications expressed entirely in Java. It provided a detailed description of how desired program properties can be expressed as axioms written in an idiomatic Java style, including a rich and flexible mechanism for organising the axioms into composable modules (composable specifications). The proposed organisation mechanism is structured enough that the testing tool, JAxT, can automatically compute all axioms pertaining to a given class and generate JUnit test cases and test suites from their composition. By re-executing JAxT periodically, the unit tests can trivially be kept in sync with the axioms as these change. The study also discussed design and implementation aspects of JAxT, illustrating how the techniques proposed in this dissertation may be applied in practice. The results of the study suggest that language-general, domain-specific transformation languages provide an attractive vehicle for expressing interactive language processing problems, but that additional tool support may be necessary before most programmers can be expected to benefit from the increased succinctness offered by these languages.
# Making SQL Queries Correct on Incomplete Databases: A Feasibility Study

Paolo Guagliardo, School of Informatics, The University of Edinburgh, pguaglia@inf.ed.ac.uk
Leonid Libkin, School of Informatics, The University of Edinburgh, libkin@inf.ed.ac.uk

## ABSTRACT

Multiple issues with SQL's handling of nulls have been well documented. Having efficiency as its key goal, evaluation of SQL queries disregards the standard notion of correctness on incomplete databases – certain answers – due to its high complexity. As a result, it may produce answers that are just plain wrong. It was recently shown that SQL evaluation can be modified, at least for first-order queries, to return only correct answers. But while these modifications came with good theoretical complexity bounds, they have not been tested in practice. The goals of this proof-of-concept paper are to understand whether wrong answers can be produced by SQL queries in real-world scenarios, and whether proposed techniques for avoiding them can be made practically feasible. We use the TPC-H benchmark, and show that for some typical queries involving negation, wrong answers are very common. On the other hand, existing solutions for fixing the problem do not work in practice at all. By analyzing the reasons for this, we come up with a new modified way of rewriting SQL queries that restores correctness. We conduct experiments which show the feasibility of our solution: the small price tag it imposes can often be tolerated to ensure correct results, and we do not miss correct answers that the usual SQL evaluation produces. The overall conclusion is that correct evaluation can be realistically achieved in the presence of nulls, at least for the SQL fragment that corresponds to first-order queries.

## 1. INTRODUCTION

The way incomplete information is handled in commercial DBMSs, specifically by SQL, has been heavily criticized for producing counter-intuitive and just plain incorrect answers [7, 9].
The reason behind this behavior is that SQL designers had first and foremost efficient evaluation in mind, but correctness and efficiency do not always get along. The standard theoretical approach to answering queries on incomplete databases, which is widely accepted as providing the right notion of answers, is to compute certain answers. These are query answers that do not depend on how the unknown data is interpreted. However, computing them is not easy, in fact coNP-hard for most reasonable semantics, if we deal with relational calculus/algebra queries [2]. SQL's evaluation is very efficient: it is in AC⁰ (a small parallel complexity class) for the same class of queries, and thus it provably cannot capture certain answers.

The gap in complexity is not yet a reason for undesirable behavior: one can easily imagine that SQL evaluation produces an approximation of certain answers. There are two ways in which certain answers and SQL's evaluation could differ:

- SQL can miss some of the tuples that belong to certain answers, thus producing false negatives; or
- it can return some tuples that do not belong to certain answers, that is, false positives.

False negatives can be accepted as the price to be paid for lowering complexity; after all, they just indicate that the correct answer is approximated. False positives, on the other hand, are outright wrong answers and therefore should not be tolerated. The problem with SQL is that it produces both kinds of errors. To see how false positives can be generated, consider a simple query computing the difference of two relations R and S, each with a single attribute A:

```
SELECT R.A FROM R
WHERE NOT EXISTS (SELECT * FROM S
                  WHERE R.A = S.A)
```

When \( R = \{1\} \) and \( S = \{\text{NULL}\} \), the output is \( \{1\} \), but it is not a certain answer. Indeed, if the missing value represented by \( \text{NULL} \) is interpreted as 1, the difference \( R - S \) is empty, and thus so is the set of certain answers.
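The false positive above can be reproduced by simulating SQL's three-valued comparison, under which any comparison involving NULL evaluates to unknown and is treated as not-true in a WHERE clause. The following sketch (written in plain Java, with `null` standing in for SQL NULL; any SQL engine behaves the same way on the query above) hand-evaluates the NOT EXISTS query:

```java
import java.util.*;

// Sketch: simulating SQL's evaluation of
//   SELECT R.A FROM R WHERE NOT EXISTS (SELECT * FROM S WHERE R.A = S.A)
// under three-valued logic, with null playing the role of SQL NULL.
public class FalsePositiveDemo {

    // SQL equality is true only when both operands are non-null and equal;
    // comparing with NULL yields 'unknown', which never satisfies WHERE.
    static boolean sqlEqualsIsTrue(Integer a, Integer b) {
        return a != null && b != null && a.equals(b);
    }

    static List<Integer> difference(List<Integer> r, List<Integer> s) {
        List<Integer> result = new ArrayList<>();
        for (Integer a : r) {
            boolean exists = false;           // EXISTS subquery
            for (Integer b : s)
                if (sqlEqualsIsTrue(a, b))
                    exists = true;
            if (!exists)                      // NOT EXISTS
                result.add(a);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> r = Arrays.asList(1);
        List<Integer> s = Arrays.asList((Integer) null);
        // SQL returns {1}, although interpreting NULL as 1 makes R - S
        // empty, so 1 is not a certain answer: a false positive.
        System.out.println(difference(r, s)); // [1]
    }
}
```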
The reasons behind SQL’s incorrect behavior have their roots in the flawed three-valued logic approach it uses for handling nulls. Multiple attempts to fix it have been made in the past (see, e.g., [11, 15, 31]) although none of them came with formal correctness guarantees. Recently, [22] proposed a new approach to fixing SQL’s evaluation scheme that provided provable correctness guarantees. It showed how to translate a query \( Q \) into a query \( Q' \) such that: - false positives never occur: \( Q' \) returns a subset of certain answers to \( Q \); - data complexity of \( Q' \) is still AC⁰; and - on databases without nulls, \( Q \) and \( Q' \) produce the same results. Given the attractive theoretical properties of the approach, we want to understand whether these theoretical guarantees can work in practice. To this end, we need to answer two main questions: **Question 1.** Are false positives a real problem? Do they occur in real-life queries over databases with nulls? **Question 2.** Can algorithms that correctly evaluate queries on databases with nulls be efficiently implemented? What is the price to pay, in terms of query evaluation performance, for correctness guarantees? Since algorithms in [22] introduced extra steps to restore correctness, we do not expect, in general, to outperform native SQL evaluation, which was designed exclusively to optimize execution time. We can hope, however, that the overhead is sufficiently small. If this is so, one can envision two modes of evaluation: the standard one, where efficiency is the only concern, and an alternative, perhaps slightly more expensive one, that provides correctness guarantees. The difference is then the price of correctness, and we need to understand what it is. This work is the first feasibility study in this direction, and it produces promising results that warrant further investment into designing algorithms with correctness guarantees for larger classes of queries. 
Here we consider relational calculus/algebra queries and provide the following results.

**False positives.** They are a real problem for queries that involve negation. We look at queries inspired by the TPC-H benchmark [28] and show that false positives are always present. Sometimes they constitute almost 100% of answers; for other queries, they quickly grow with the null rate (the probability that a null occurs in a given position), accounting for between 1.5% and 20% of answers when the null rate is 2% and rising to at least 15% even for the best-behaved queries when the null rate is 10%.

**Algorithms with correctness guarantees.** Those presented in [22] have a very good theoretical complexity but they cannot be implemented as-is due to the extensive use of Cartesian product in translated queries. To overcome this, we design an alternative way of transforming queries that still guarantees correctness while producing queries that are much more implementation-friendly. For positive queries (i.e., not involving negation in any form) and on databases without nulls, it coincides with the usual SQL evaluation. We then test our new algorithm on databases with nulls, for our sample TPC-H queries with negation, and observe two kinds of behavior. Sometimes, the performance of translated queries with correctness guarantees is very good, with the price of correctness – the overhead of executing the translated query – not exceeding 4% (and sometimes in fact making queries up to 10,000 times faster). For other queries, the translation may “confuse” the optimizer. That is, the optimizer generates plans with astronomical costs (although in some cases these are not reflected by the actual running time).
The reason is that some conditions of the form `R.A = S.B` get translated into `R.A = S.B OR R.A IS NULL` to enforce correctness, but this causes the optimizer to badly overestimate the execution time, up to the point of resorting to nested-loop implementation of joins. However, we show that simple syntactic manipulations of queries can restore sanity to the optimizer, and result in reasonable performance, with translated queries running at roughly half the speed of the original ones.

**Precision and recall.** We need to address them to say how good our evaluation techniques are. Precision refers to how many returned answers are certain, and recall to the proportion of certain answers returned. The formal correctness guarantees we prove imply that the precision of our algorithms is 100% (while for some of the queries we experiment with, SQL’s precision is close to zero). It is easy to achieve high precision with low recall: just return nothing, and there are no false positives. Since SQL returns many more answers than necessary (false positives), it should not be surprising that sometimes it also returns a certain answer our procedure misses. However, in all our experiments, recall stands at 100%: our procedure returns every certain answer that SQL evaluation itself returns.

The key conclusion of this study is that achieving correctness in SQL query evaluation on databases with nulls is feasible. As we said, this is a proof-of-concept paper, whose goal is to demonstrate that the idea of rewriting queries to achieve correctness is implementable. We have done this for first-order queries and the missing-information interpretation of nulls. Of course much more is needed before one can introduce a second, fully correct, evaluation mode for SQL queries (e.g., by writing `SELECT CERTAIN`); we shall discuss particular tasks (such as handling bag semantics, aggregates, non-applicable nulls, etc.) at the end of the paper.
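Precision and recall here are the usual set-based measures over answer sets. For concreteness, a small sketch (ours, not from the paper) of how one computes them given a returned answer set and the set of certain answers:

```python
def precision_recall(returned, certain):
    """Precision: fraction of returned tuples that are certain
    (no false positives means precision 1.0).
    Recall: fraction of certain answers that were returned
    (no false negatives means recall 1.0)."""
    returned, certain = set(returned), set(certain)
    tp = returned & certain  # true positives
    precision = len(tp) / len(returned) if returned else 1.0
    recall = len(tp) / len(certain) if certain else 1.0
    return precision, recall

# SQL returns a false positive (2) alongside the certain answer (1):
print(precision_recall({1, 2}, {1}))   # (0.5, 1.0)
# A rewriting with correctness guarantees returns only certain tuples:
print(precision_recall({1}, {1}))      # (1.0, 1.0)
```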
**Organization.** Basic notions and definitions are given in Section 2. Section 3 discusses the setup for our experiments. Section 4 presents experimental results on false positives in SQL query evaluation. Section 5 describes the translation of [22] and explains why it cannot be efficiently implemented. Section 6 presents our improved implementation-friendly translation. Section 7 contains the experimental results for the improved translation and describes the price of correctness for SQL query evaluation. Section 8 discusses extensions to cover other language aspects of SQL. The query translations used in our experiments are in the appendix. ## 2. PRELIMINARIES We consider incomplete databases with nulls interpreted as missing information. Much of the following is standard in the literature on databases with incomplete information, see, e.g., [1, 12, 14, 22, 30]. The usual way of modeling SQL’s nulls under this interpretation is to use Codd nulls which are a special case of the more general marked, or labeled, nulls. Databases are populated by two types of elements: constants and nulls, coming from countably infinite sets denoted by \( \text{Const} \) and Null, respectively. Nulls are denoted by \( \bot \), sometimes with sub- or superscripts. For the purpose of the general model we follow the textbook approach assuming one domain \( \text{Const} \) for all non-null elements appearing in databases. In real life (and our experiments), such elements can be of many different types, and those appearing in the same column must be of the same type. Adjusting results and translations of queries for this setting is completely straightforward. A relational schema, or vocabulary, is a set of relation names with associated arities. 
With each \( k \)-ary relation symbol \( S \) from the vocabulary, an incomplete relational instance \( D \) associates a \( k \)-ary relation \( S^D \) over \( \text{Const} \cup \text{Null} \), that is, a finite subset of \((\text{Const} \cup \text{Null})^k\). When the instance is clear from the context, we write \( S \) instead of \( S^D \). A valuation \( v \) is a map from Null to Const; applying it to \( D \) replaces every null \( \bot \) by \( v(\bot) \) and yields a complete database \( v(D) \). Certain answers with nulls are the tuples preserved under every valuation: \( \text{cert}(Q, D) = \{\bar{a} \mid v(\bar{a}) \in Q(v(D)) \text{ for every valuation } v\} \). Such answers may themselves contain nulls; for instance, if \( Q \) simply returns a relation \( R = \{(1, \bot), (2, 3)\} \), then cert(Q, D) contains both tuples (1, ⊥) and (2, 3). The standard certain answers are exactly the null-free tuples in cert(Q, D) [22].

Definition 1. A query evaluation algorithm has correctness guarantees for query Q if for every database D it returns a subset of cert(Q, D).

In other words, with correctness guarantees, false positives are not allowed: all returned tuples must be certain answers. Often our evaluation algorithms will be of the following form: translate Q into another query Q’, and then run it on D. If Q’(D) ⊆ cert(Q, D), we say that Q’ has correctness guarantees for Q. Some results concerning correctness guarantees are known. By naïve evaluation for a fragment of relational algebra we mean the algorithm that treats elements of Null as if they were the usual database entries, i.e., each comparison ⊥ = c for c ∈ Const evaluates to false, and ⊥ = ⊥’ evaluates to true if ⊥ and ⊥’ are the same element of Null.

FACT 1 ([12, 14, 22]). Naïve evaluation has correctness guarantees for positive relational algebra, i.e., relational algebra without the difference operator and without disequalities in selection conditions. In fact it computes exactly certain answers with nulls. This remains true even if we extend the language with the division operator as long as its second argument is a relation in the database.

Recall that division is a derived relational algebra operation; it computes tuples in a projection of a relation appearing in all possible combinations with tuples from another relation (e.g., “find students taking all courses”).

SQL evaluation. For SQL, the evaluation procedure is different, as it is based on a 3-valued logic (3VL); see [9].
In particular, comparisons such as ⊥ = c, as well as comparisons between two nulls, evaluate to unknown, which is then propagated through conditions using the rules of 3VL. More precisely, selection conditions can evaluate to true (t), false (f), or unknown (u). If at least one attribute in a comparison is null, the result of the comparison is u. The interaction of u with Boolean connectives is as follows: ¬u = u, u ∧ t = u ∧ u = u, u ∧ f = f, and dually by De Morgan’s law for ∨. Then, σθ selects tuples on which θ evaluates to t (that is, f and u tuples are not selected). We refer to the result of evaluating a query Q in this way as EvalSQL(Q, D).

FACT 2 ([22]). EvalSQL has correctness guarantees for the positive fragment of relational algebra.

The positive fragment of relational algebra corresponds to the fragment of SQL in which negation does not appear in any form, i.e., EXCEPT is not allowed, there are no negations in WHERE conditions, and the use of NOT IN and NOT EXISTS for subqueries is prohibited.

3. SETUP: QUERIES AND INSTANCES

We have seen that what breaks correctness guarantees is queries with negation; the example used in the introduction was based on a NOT EXISTS subquery. To choose concrete queries for our experiments, we use the well-established TPC-H benchmark that models a business application scenario and typical decision support queries [28]. Its schema contains information about customers who place orders consisting of several items, and suppliers who supply parts for those orders. There are also small relations describing geographical information (nations and regions). In terms of size, lineitem is by far the largest table, which records the items constituting an order and associated parts and suppliers, followed by orders itself. Given the decision support nature of TPC-H queries, many of them involve aggregation.
However, aggregation is not important for our purposes: if a tuple without an aggregate value is a false positive, it remains a false positive when an extra attribute value is added. Since we only need to measure the ratio of false positives, and the relative change of speed in query evaluation, we can safely drop aggregates from the output of those queries. Most of the TPC-H queries do not have negation; they are essentially aggregate queries on top of multi-way joins. Two of them, queries 21 and 22, do use NOT EXISTS, so we choose them for our experiments. We further supplement them with two very typical database textbook queries (slightly modified to fit the TPC-H schema) that are designed to teach subqueries. We provide all these queries below, to give the reader an idea of the features involved.

Query Q1. It is a TPC-H query meant to identify suppliers who were not able to ship required parts in a timely manner. It returns suppliers from a given nation, and multi-supplier finalized orders (i.e., with status ‘F’) where the supplier was the only one who failed to meet the committed delivery date. The query is:

```sql
SELECT s_suppkey, o_orderkey
FROM supplier, lineitem l1, orders, nation
WHERE s_suppkey = l1.l_suppkey
  AND o_orderkey = l1.l_orderkey
  AND o_orderstatus = 'F'
  AND l1.l_receiptdate > l1.l_commitdate
  AND EXISTS (SELECT *
              FROM lineitem l2
              WHERE l2.l_orderkey = l1.l_orderkey
                AND l2.l_suppkey <> l1.l_suppkey)
  AND NOT EXISTS (SELECT *
                  FROM lineitem l3
                  WHERE l3.l_orderkey = l1.l_orderkey
                    AND l3.l_suppkey <> l1.l_suppkey
                    AND l3.l_receiptdate > l3.l_commitdate)
  AND s_nationkey = n_nationkey
  AND n_name = $nation
```

where $nation is a randomly chosen value among the keys of table nation.

Query Q2. It is another TPC-H query, which aims at identifying countries where there are customers who may be likely to make a purchase.
It returns customers within a specific range of countries who have not recently placed orders but have a greater than average positive account balance. The query is:

```sql
SELECT c_custkey, c_nationkey
FROM customer
WHERE c_nationkey IN ($countries)
  AND c_acctbal > (SELECT avg(c_acctbal)
                   FROM customer
                   WHERE c_acctbal > 0.00)
  AND NOT EXISTS (SELECT *
                  FROM orders
                  WHERE o_custkey = c_custkey)
```

where $countries is a list of 7 distinct values randomly chosen among the keys of table nation.

Query Q3. It is a classical textbook query that finds all orders supplied entirely by a specific supplier:

```sql
SELECT o_orderkey
FROM orders
WHERE NOT EXISTS (SELECT *
                  FROM lineitem l1
                  WHERE l1.l_orderkey = o_orderkey
                    AND l1.l_suppkey <> $supp_key)
```

where $supp_key is a randomly chosen value among the keys of table supplier.

Query Q4. It is another standard textbook query illustrating a correlated subquery with NOT EXISTS. The subquery uses multiple relations and a complex join condition. It asks for orders that do not include any part of a specific color supplied by a supplier from a specific country.

```sql
SELECT o_orderkey
FROM orders
WHERE NOT EXISTS (SELECT *
                  FROM lineitem, part, supplier, nation
                  WHERE l_orderkey = o_orderkey
                    AND l_partkey = p_partkey
                    AND l_suppkey = s_suppkey
                    AND p_name LIKE '%' || $color || '%'
                    AND s_nationkey = n_nationkey
                    AND n_name = $nation)
```

Here $nation is a randomly chosen value among the keys of table nation and $color is a randomly chosen string from a list of 92 possibilities provided by TPC-H.

Generating test instances. The TPC-H benchmark comes with its own standard tool, DBGen, for generating instances. However, DBGen generates only complete instances (i.e., without nulls), so we need to go over them and insert nulls to make them fit for our purpose. For that, we separate attributes into nullable and non-nullable ones; the latter are those where nulls cannot occur (due to primary key constraints, or NOT NULL declarations).
For nullable attributes, we choose a probability, referred to as the null rate of the resulting instance, and simply flip a coin for each tuple to decide whether the corresponding value is to be replaced by a null. As a result, for each nullable attribute, the instance will contain a percentage of nulls roughly equal to the null rate with which nulls are generated. We consider null rates in the range 0.5%–10%. The smallest instance DBGen generates (in line with the prescriptions of the TPC-H standard) is about 1GB in size, containing just under 9 · 10^6 tuples. We shall measure the relative performance of our translated queries w.r.t. the original ones on instances of 1GB, 3GB, 6GB, and 10GB size. To estimate the amount of false positives in query answers, we shall generate a high number of incomplete instances, on which our sample queries are executed multiple times. False positives are detected by algorithms that are quite expensive. To speed up the process, and since in any case we are interested only in the percentage of false positives, for this experiment we use smaller instances. These are generated by means of a configurable data generator, DataFiller [8], and are compliant with the TPC-H specification in everything but size, which we scale down by a factor of $10^3$. All our experiments were run using a local installation of PostgreSQL 9.5.1 on a dedicated machine with an Intel Core i5-3470 quad-core CPU @ 3.20GHz and 8GB of RAM. Note that since we measure the percentage of false positive answers and the relative performance of our scheme for obtaining correct answers, the exact hardware configuration and choice of a DBMS are of less importance. ### 4. HOW MANY FALSE POSITIVES? A false positive answer is a tuple that is returned by SQL evaluation and yet is not certain; that is, the set of false positives produced by a query $Q$ on a database $D$ is $Q(D) - cert(Q, D)$. 
They only occur on databases with nulls (on complete databases, $Q(D) = cert(Q, D)$); a simple example was given in the introduction. Our goal now is to see whether real-life queries indeed produce false positives. For this, we shall run our test queries on generated instances with nulls and compare their output with certain answers. However, certain answers are expensive to compute: the problem is coNP-hard for queries with negation, thus a naive computation will require exponential time. As a way around it, we design specialized algorithms to detect (some of the) false positives for queries $Q_1$–$Q_4$, and compare their results with SQL outputs. This will tell us that at least some percentage of SQL answers are false positives.

#### Algorithms for detecting false positives.

Such algorithms take as input the bindings for the parameters of the query, a database $D$ and an answer tuple $\bar{a}$, and return true if $\bar{a}$ is a false positive, thereby giving us a lower bound on the number of false positives. The underlying idea is the same for all queries: we look for the presence of null values in relevant comparisons involving nullable attributes, because in such a case the comparison can be made true or false at need in order to falsify the answer tuple. For instance, to falsify an order id $k$ in the answer to query $Q_3$, we look for a tuple in table `lineitem` where the value of attribute `l_orderkey` is $k$ and the value of `l_suppkey` is null. Intuitively, if there is a lineitem for order $k$ where the supplier is unknown, then that supplier may well be different from the one specified by the parameter `$supp_key`. Detecting false positives in query $Q_2$ is also simple: it suffices to check whether there is a tuple in table `orders` for which the attribute `o_custkey` is null; in that case, all answers produced by the query are false positives.
Intuitively, if there is an order for which the customer is unknown, then that customer could be anybody, including the one in the answer tuple. For the remaining two sample queries, the pseudocode to detect false positives is given in Algorithms 1 and 2.

#### False positives: experimental results.

Recall that null values in instances are randomly generated: each nullable attribute can become null with the same fixed probability, referred to as the null rate. We let null rates range from 0.5% to 6% in steps of 0.5% and from 6% to 10% in steps of 1%. To get good estimates, we generate 100 instances for each null rate, and run each query 5 times, with randomly generated values for its parameters. At each execution, a lower bound on the percentage of false positives is calculated by means of the algorithms described above. The results of the experiment are shown in Figure 1, reporting the average over all executions. The outcome of the experiment shows that the problem of incorrect query answers in SQL is not just theoretical but it may well occur in practical settings: all of the queries we tested produce false positives on incomplete databases with as low as 0.5% of null values. In fact for some queries the percentage of false positives is very high: for $Q_2$, almost all answers are such, even when few nulls are present. For $Q_3$, at least a quarter of all answers are wrong when the null rate is just 3%, rising to at least half incorrect answers for the null rate of 8%. Other queries, such as $Q_1$ and $Q_4$, appear to be more robust (as we only find a lower bound on the number of false positives), but false positives are always present, even at the lowest null rate tested, and in fact they constitute at least 10% of all answers at null rates of 7% and above. The overall conclusion is clear: false positives do occur in answers to very common queries with negation, and account for a significant portion of the answers.
#### 5. CORRECTNESS: A SIMPLE TRANSLATION

Since false positives are a real problem, we want to devise evaluation strategies that avoid them when correctness of query results is required. The translations of [22] are shown in Figure 2. Here adom refers to the query computing the active domain (the union of all projections on each attribute of all relations), and $\theta^*$ refers to the translation of selection conditions, which is defined inductively as follows:

\[
\begin{align*}
(A = B)^* &= (A = B) \\
(A = c)^* &= (A = c) \text{ if } c \text{ is a constant} \\
(A \neq B)^* &= (A \neq B) \land \text{const}(A) \land \text{const}(B) \\
(A \neq c)^* &= (A \neq c) \land \text{const}(A) \\
(\theta_1 \lor \theta_2)^* &= \theta_1^* \lor \theta_2^* \\
(\theta_1 \land \theta_2)^* &= \theta_1^* \land \theta_2^*
\end{align*}
\]

While (1) and (2) ensure correctness guarantees for all relational algebra queries, and queries $Q^t$ and $Q^f$ have good theoretical complexity (AC⁰), they suffer from a number of problems that severely hinder their practical implementation. Crucially, they require the computation of active domains and, even worse, their Cartesian products. While expressible in relational algebra, the $Q^f$ translations for selections, products, projections, and even base relations become prohibitively expensive. Several optimizations have been suggested in [22] (at the price of missing some certain answers), but the cases of projection and base relations do not appear to have any reasonable alternatives. Yet another problem is the complicated structure of the queries $Q^f$. When translations are applied recursively, this leads to very complex queries $Q^t$ if $Q$ uses difference. In fact we tried a simple experiment with the translations in Figure 2, and found that they are already infeasible for much smaller databases than the smallest TPC-H compliant instance: some of the queries start running out of memory already on instances with fewer than $10^3$ tuples.
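The translation $\theta \mapsto \theta^*$ is a plain structural recursion and is easy to prototype. The sketch below (our own illustration) encodes conditions between attributes as nested tuples, a hypothetical encoding chosen just for brevity, and applies the rules above; comparisons with constants, which only need `const` on the attribute side, are left out for simplicity:

```python
# Conditions between attributes are nested tuples:
#   ('eq', A, B), ('neq', A, B), ('and', t1, t2), ('or', t1, t2).
# ('const', A) asserts that attribute A holds a non-null value.

def star(theta):
    """theta* translation: equalities are kept as-is, while
    disequalities additionally require both sides to be non-null."""
    op = theta[0]
    if op == 'eq':
        return theta
    if op == 'neq':
        _, a, b = theta
        return ('and', theta, ('and', ('const', a), ('const', b)))
    if op in ('and', 'or'):
        return (op, star(theta[1]), star(theta[2]))
    raise ValueError(f'unknown condition: {op}')

print(star(('neq', 'A', 'B')))
# ('and', ('neq', 'A', 'B'), ('and', ('const', 'A'), ('const', 'B')))
```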
All this tells us that we need an implementable alternative, which we present next.

6. AN IMPLEMENTATION-FRIENDLY TRANSLATION

To overcome the practical difficulties posed by the translation in Figure 2, we propose an alternative translation that is implementation-friendly and comes with sufficient correctness guarantees. In this translation, we do not produce a second query $Q^f$ that underapproximates certain answers to the negation of the query, which was the main source of complexity. To see what we can replace it with, note that in the $Q^t$ translation, $Q^f$ was only used in the rule for difference: tuples that are certain answers to $Q_1 - Q_2$ are those that are certainly answers to $Q_1$, and certainly not answers to $Q_2$. That necessitated working with the complex $Q^f$ translation. But we can use a slightly different rule: if a tuple is a certain answer to $Q_1$, and it does not match any tuple that could possibly be an answer to $Q_2$, then it is a certain answer to $Q_1 - Q_2$. The advantage of this is that the query that approximates possible answers can be built in a much simpler way than $Q^f$. For instance, for a base relation $R$, it will be just $R$ itself, as opposed to the complex expression involving adom we used before. We need to formally say what “(not) matching possible answers” means. To this end, we define approximations of possible answers and two matching-based semijoin operators. There already exists a notion of maybe-answers [2, 30] – answers that appear in $Q(v(D))$ for at least one valuation $v$ – but those can be infinite, and include arbitrary elements outside of adom$(D)$. What we need instead is a compact representation.

Definition 3. Given a $k$-ary query $Q$ and an incomplete database $D$, we say that a set $A \subseteq \text{adom}(D)^k$ represents potential answers to $Q$ on $D$ if $Q(v(D)) \subseteq v(A)$ for every valuation $v$.
A query $Q'$ represents potential answers to $Q$ if $Q'(D)$ represents potential answers to $Q$ on $D$, for every $D$. Obviously, there are trivial ways of representing potential answers: take, e.g., $\text{adom}(D)^k$. But we shall be looking for good approximations, just as we are looking for good approximations of $\text{cert}(Q, D)$, for which bad ones can also be found easily (e.g., the empty set).

\[
\begin{align*}
R^t &= R \\
(Q_1 \cup Q_2)^t &= Q_1^t \cup Q_2^t \\
(Q_1 \cap Q_2)^t &= Q_1^t \cap Q_2^t \\
(Q_1 - Q_2)^t &= Q_1^t \cap Q_2^f \\
(\sigma_\theta(Q))^t &= \sigma_{\theta^*}(Q^t) \\
(Q_1 \times Q_2)^t &= Q_1^t \times Q_2^t \\
(\pi_\alpha(Q))^t &= \pi_\alpha(Q^t)
\end{align*}
\]

\[
\begin{align*}
R^f &= \{\, \bar{s} \in \text{adom}^{\text{ar}(R)} \mid \nexists\, \bar{r} \in R : \bar{r} \uparrow \bar{s} \,\} \\
(Q_1 \cup Q_2)^f &= Q_1^f \cap Q_2^f \\
(Q_1 \cap Q_2)^f &= Q_1^f \cup Q_2^f \\
(Q_1 - Q_2)^f &= Q_1^f \cup Q_2^t \\
(\sigma_\theta(Q))^f &= Q^f \cup \sigma_{(\neg\theta)^*}\big(\text{adom}^{\text{ar}(Q)}\big) \\
(Q_1 \times Q_2)^f &= \big(Q_1^f \times \text{adom}^{\text{ar}(Q_2)}\big) \cup \big(\text{adom}^{\text{ar}(Q_1)} \times Q_2^f\big) \\
(\pi_\alpha(Q))^f &= \big\{\, \bar{s} \mid \forall\, \bar{t} \in \text{adom}^{\text{ar}(Q)} : \pi_\alpha(\bar{t}\,) = \bar{s} \Rightarrow \bar{t} \in Q^f \,\big\}
\end{align*}
\]

**Figure 2:** Relational algebra translations of [22].

To express conditions involving matching, we shall need two semijoin operations based on unifiable tuples (see Definition 2).

Definition 4. For relations $R$, $S$ over $\text{Const} \cup \text{Null}$ with the same set of attributes, the left unification semijoin and the unification antijoin are

\[ R \ltimes S = \{\, \bar{r} \in R \mid \exists\, \bar{s} \in S : \bar{r} \uparrow \bar{s} \,\} \]

\[ R \triangleright S = R - (R \ltimes S) = \{\, \bar{r} \in R \mid \nexists\, \bar{s} \in S : \bar{r} \uparrow \bar{s} \,\} \]

These are similar to the standard definition of (anti) semijoin; we simply use unifiability of tuples as the join condition.
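For Codd nulls, where every null occurs only once, two tuples unify precisely when they agree on each position in which both hold constants. Under that reading, both operators can be prototyped in a few lines (a Python sketch of ours, with `None` standing for a null):

```python
# Unification semijoin and antijoin over relations with Codd nulls.
# With Codd nulls (no repeated nulls), two tuples unify iff positions
# where both values are constants carry equal constants.

def unifies(r, s):
    return all(a is None or b is None or a == b for a, b in zip(r, s))

def unification_semijoin(R, S):
    """Tuples of R that unify with some tuple of S."""
    return {r for r in R if any(unifies(r, s) for s in S)}

def unification_antijoin(R, S):
    """Tuples of R that unify with no tuple of S."""
    return {r for r in R if not any(unifies(r, s) for s in S)}

R = {(1, 2), (2, None)}
S = {(1, None), (3, 4)}
print(unification_semijoin(R, S))  # {(1, 2)}: unifies with (1, None)
print(unification_antijoin(R, S))  # {(2, None)}
```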
They are definable operations: we have that \( R \ltimes S = \pi_R(\sigma_{\theta_u}(R \times S)) \), where the projection is on all attributes of \( R \) and the unification condition \( \theta_u \) is true for a tuple \( \bar{r}\bar{s} \in R \times S \) iff \( \bar{r} \uparrow \bar{s} \). The condition \( \theta_u \) is expressible as a selection condition using the predicates \( \text{const} \) and \( \text{null} \). Note that, in this notation, \( R^f = \text{adom}^{\text{ar}(R)} \triangleright R \). Now, we can see why queries that represent potential answers are useful.

**Lemma 1.** Consider the translations \( Q \mapsto Q^+ \) given in Figure 3 by \((3.1)-(3.7)\), where the only assumption on \( Q_2^* \) in \((3.4)\) is that it represents potential answers to \( Q_2 \). Then \( Q^+ \) has correctness guarantees for \( Q \).

**Proof Sketch.** The proof is by induction on the structure of the query; here, we show the important case of set difference. Let \( Q = Q_1 - Q_2 \), let \( D \) be a database, let \( \bar{r} \) be in \( Q^+(D) = Q_1^+(D) \triangleright Q_2^*(D) \), and let \( v \) be a valuation on \( D \). We need to show that \( v(\bar{r}) \in Q(v(D)) \). As \( \bar{r} \) is in \( Q_1^+(D) \), we get that \( v(\bar{r}) \in Q_1(v(D)) \) by the induction hypothesis. Now, suppose for contradiction that \( v(\bar{r}) \in Q_2(v(D)) \). Since \( Q_2^* \) represents potential answers to \( Q_2 \) by assumption, we have \( v(\bar{r}) \in v(Q_2^*(D)) \). Hence, there exists a tuple \( \bar{s} \in Q_2^*(D) \) that unifies with \( \bar{r} \), and this implies that \( \bar{r} \notin Q_1^+(D) \triangleright Q_2^*(D) \), which contradicts our assumption that \( \bar{r} \in Q^+(D) \). This shows that \( v(\bar{r}) \notin Q_2(v(D)) \) and, in turn, \( v(\bar{r}) \in Q(v(D)) \).
\( \square \)

Given this lemma, our next goal is to produce a translation of queries that represent potential answers. As for queries \( Q^+ \), this can be done almost by mimicking the structure of queries and using a query with correctness guarantees when it comes to translating the difference operation. We also need to modify selection conditions: the new translation \( \theta \mapsto \theta^{**} \) is given by \( \theta^{**} = \neg(\neg\theta)^* \). Recall that negating selection conditions means propagating negations through them, and interchanging \( = \) and \( \neq \), and \( \text{const} \) and \( \text{null} \). For completeness, we give it here:

\[
\begin{align*}
(A \neq B)^{**} &= (A \neq B) \\
(A \neq c)^{**} &= (A \neq c) \text{ if } c \text{ is a constant} \\
(A = B)^{**} &= (A = B) \vee \text{null}(A) \vee \text{null}(B) \\
(A = c)^{**} &= (A = c) \vee \text{null}(A) \\
(\theta_1 \lor \theta_2)^{**} &= \theta_1^{**} \lor \theta_2^{**} \\
(\theta_1 \land \theta_2)^{**} &= \theta_1^{**} \land \theta_2^{**}
\end{align*}
\]

**Lemma 2.** Consider the translations \( Q \mapsto Q^* \) given in Figure 3 by \((4.1)-(4.7)\), where the only assumption on \( Q_2^+ \) in \((4.4)\) is that it has correctness guarantees for \( Q_2 \). Then \( Q^* \) represents potential answers to \( Q \).

**Proof Sketch.** The proof is by induction on the structure of the query; here, we present the case of intersection for illustration. When \( Q = Q_1 \cap Q_2 \), we have \( Q^* = Q_1^* \ltimes Q_2^* \). Let \( \bar{r} \in Q(v(D)) \); then, by the induction hypothesis, \( \bar{r} \in v(Q_i^*(D)) \) for \( i = 1, 2 \). So, there are tuples \( \bar{r}_1 \in Q_1^*(D) \) and \( \bar{r}_2 \in Q_2^*(D) \) such that \( v(\bar{r}_1) = v(\bar{r}_2) = \bar{r} \). Hence, \( \bar{r}_1 \uparrow \bar{r}_2 \) and thus \( \bar{r}_1 \in Q_1^*(D) \ltimes Q_2^*(D) = Q^*(D) \), with \( v(\bar{r}_1) = \bar{r} \). This shows that \( \bar{r} \in v(Q^*(D)) \), as required. \( \square \)

Lemmas 1 and 2 tell us that we can now combine the two translations in Figure 3 to obtain correctness guarantees.
Using the lemmas, mutual induction on the expressions in Figure 3 shows the following.

**Theorem 1.** For the translation $Q \mapsto (Q^+, Q^*)$ in Figure 3, the query $Q^+$ has correctness guarantees for $Q$, and $Q^*$ represents potential answers to $Q$. In particular, $Q^+(D) \subseteq \text{cert}(Q, D)$ for every database $D$.

The theoretical complexity bounds for queries $Q^t$ and $Q^+$ are the same: both have the low AC⁰ data complexity. However, the real-world performance of $Q^+$ will be significantly better, as it completely avoids large Cartesian products. As an example, consider the query
$$Q = R - \big( \pi_{\alpha}(T) - \sigma_{\theta}(S) \big)$$
where $T$ has arity $k$. Its translation $Q^t$ following the rules in Figure 2 is
$$R \cap \Big( \big( \pi_{\alpha}(\text{adom}^k) - \pi_{\alpha}(\text{adom}^k \ltimes T) \big) \cup \sigma_{\theta^*}(S) \Big)$$
while the translation $Q^+$ we propose is much simpler:
$$R \triangleright \big( \pi_{\alpha}(T) - \sigma_{\theta^*}(S) \big)$$
and avoids Cartesian products of very large sets, computed multiple times, as in $Q^t$.

We conclude this section with a few remarks. First, the translation of Figure 3 is really a family of translations. The proof of Theorem 1 applies to show the following.

**Corollary 1.** If in the translation in Figure 3 one replaces the right sides of rules by queries

- contained in those listed in (3.1)–(3.7), and
- containing those listed in (4.1)–(4.7),

then the resulting translation continues to satisfy the claim of Theorem 1.

This opens up the possibility of optimizing translations (at the expense of potentially returning fewer tuples). For instance, if we modify the translations of selection conditions so that $\theta^*$ is a stronger condition than the original and $\theta^{**}$ is a weaker one, we retain overall correctness guarantees.
In particular, the unification condition is expressed by a case analysis that may become onerous for tuples with many attributes; the above observation can be used to simplify the case analysis while retaining correctness.

Second, the reason why queries $Q^\star$ produce approximations of sets that represent potential answers is the same as the reason why queries $Q^+$ approximate certain answers, namely complexity. It can easily be seen that checking whether a set $A$ represents potential answers to a given query $Q$ on $D$ is in coNP, and for some queries the problem is coNP-hard as well.

**Proposition 1.** There is a fixed query $Q$ such that the following problem is coNP-complete: given a database $D$ and a set $A$ of tuples over $\text{dom}(D)$, does $A$ represent potential answers to $Q$ on $D$?

Next, we turn to the comparison of these translations with the result of SQL evaluation, i.e., $\text{Eval}_{SQL}(Q, D)$. Given that the latter can produce both types of errors – false positives and false negatives – it is not surprising that the two are in general incomparable. To see this, consider first a database $D_1$ where $R = \{(1, 2), (2, \bot)\}$, $S = \{(1, 2), (\bot, 2)\}$, and $T = \{(1, 2)\}$, and a query $Q_1 = R - (S \cap T)$. The tuple $(2, \bot)$ belongs to $\text{Eval}_{SQL}(Q_1, D_1)$ and it is a certain answer, while $Q_1^+(D_1) = \emptyset$. On the other hand, for $D_2$ with $R = \{(\bot, 1)\}$ over attributes $A, B$, and $Q_2 = \sigma_{A=B}(R)$, the tuple $(\bot, 1)$ belongs to $Q_2^\star(D_2)$, but $\text{Eval}_{SQL}(Q_2, D_2) = \emptyset$. Nonetheless, in all our experiments $Q^+$ always produced all of the certain answers returned by SQL evaluation.

As a final remark, note that (4.3) in Figure 3 is not the only possibility, since intersection is a commutative operation, but the left unification semijoin is not. Correctness guarantees also hold if we replace the left unification semijoin with the right one, which keeps unifiable tuples from the second argument.

### 7. THE PRICE OF CORRECTNESS

Now that we have established the correctness of the translation $Q \mapsto Q^+$, our goal is to test it. For this, we take queries $Q_1$–$Q_4$ from Section 3, generate incomplete TPC-H instances, and then run $Q_1$–$Q_4$ as well as their translations to compare their performance.

**Translations of SQL queries.** The translation $Q \mapsto Q^+$ was given at the level of relational algebra. While there are multiple relational algebra simulators freely available, we want to carry out our experiments using a real DBMS on instances of realistic size (which rules out relational algebra simulators). Thus, we shall take the SQL queries $Q_1$–$Q_4$, apply the translation $Q \mapsto Q^+$ to their relational algebra equivalents, and then run the results of the translation as SQL queries.

Expressing $Q_1$–$Q_4$ in relational algebra is standard. We remark though that traditional database texts tend to provide only simplified translations from SQL to algebra; for a full one, including nested subqueries that can themselves contain subqueries, a good source is [29], and we follow it here. As an example, consider query $Q_3$. Its relational algebra translation is
$$\pi_{o\_orderkey}\bigl(\text{orders} - \pi\bigl(\sigma_{\theta}(\text{lineitem} \times \text{orders})\bigr)\bigr)$$
where the inner projection is on the attributes of relation orders, and the condition $\theta$ is $l\_orderkey = o\_orderkey \land l\_suppkey \neq \{s\_key\}$.

Before computing $Q_3^+$, we need to address one more technical issue. Often, at least in the theoretical literature, SQL nulls are identified with Codd nulls (non-repeating marked nulls). While in many cases this way of modeling SQL nulls is proper, it does not always work. The main issue is that comparing a null with itself results in true for Codd nulls, but unknown for SQL nulls.
For instance, computing the self-join of $R = \{\text{null}\}$ over a single attribute $A$ by

```sql
SELECT R1.A FROM R R1, R R2 WHERE R1.A = R2.A
```

results in the empty set, while for the Codd database $R = \{\bot\}$, the evaluation of the self-join $R \bowtie R$ is $\{\bot\}$. This tells us that in full generality we cannot guarantee correctness with the SQL implementation of nulls, as it is too coarse to see when a null refers to the same value. However, the situation that causes this problem – a nullable attribute compared with itself in a self-join – is not very common, and does not affect us here, as long as we make a minor adjustment of the translations $Q^+$ and $Q^\star$ so that they work correctly when evaluated as SQL queries.

As expected, the adjustment occurs in selection conditions. For the $Q^+$ translation, we need to ensure that attributes compared for equality are not nulls (the existing translation already ensures this for disequality comparisons). For the condition translation $\theta^\star$ used in $Q^\star$, the situation is symmetric: we need to include the possibility of attributes being nulls for disequality comparisons (the existing translation already does it for equalities). That is, we change the translations as follows:
$$(A = B)^* = (A = B) \land \text{const}(A) \land \text{const}(B)$$
$$(A \neq B)^\star = (A \neq B) \lor \text{null}(A) \lor \text{null}(B)$$
and likewise for $(A = c)^*$ and $(A \neq c)^\star$. For all the queries considered here, these changes ensure that $Q^+$ continues to under-approximate certain answers and $Q^\star$ continues to represent potential answers even when SQL evaluation rules are used for conditions with nulls.

Applying then the translations to $Q_3$ gives us $Q_3^+$ as
$$\pi_{o\_orderkey}\bigl(\text{orders} - \pi\bigl(\sigma_{\theta^\star}(\text{lineitem} \times \text{orders})\bigr)\bigr)$$
where the inner projection is on the attributes of orders, and $\theta^\star$ is $l\_orderkey = o\_orderkey \land \bigl(l\_suppkey \neq \{s\_key\} \lor \text{null}(l\_suppkey)\bigr)$.
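The Codd-vs-SQL discrepancy for self-joins discussed above is easy to observe in any SQL engine; a minimal illustration using Python's built-in sqlite3 module (the table and column names mirror the example):

```python
import sqlite3

# A one-column table R holding a single SQL null.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE R (A INTEGER)")
conn.execute("INSERT INTO R VALUES (NULL)")

# Self-join comparing the null with itself: NULL = NULL evaluates to
# unknown, so the WHERE clause filters the row out and the result is empty.
rows = conn.execute(
    "SELECT R1.A FROM R R1, R R2 WHERE R1.A = R2.A"
).fetchall()
print(rows)  # []

# Under Codd (marked) null semantics, the same marked null would compare
# equal to itself, and the self-join would return the null tuple.
```

This is exactly the coarseness of SQL nulls that the adjusted condition translations compensate for.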
The left unification anti-semijoin produced by the translation is simplified to difference in $Q_3^+$ due to the following observation: if $R$ is a relation that has a key, and $S \subseteq R$, then $R \mathbin{\overline{\ltimes}} S = R - S$. In the above query, this applies since the inner projection is contained in orders. Summing up, $Q_3^+$ is expressed in SQL as

```sql
SELECT o_orderkey
FROM   orders
WHERE  NOT EXISTS (
         SELECT *
         FROM   lineitem
         WHERE  l_orderkey = o_orderkey
           AND  ( l_suppkey <> {s_key} OR l_suppkey IS NULL )
       )
```

In fact, a key feature of all the translations is that they change some conditions of the form $A = B$ into $A = B$ OR $B$ IS NULL. In general – and this has nothing to do with our translation – when several such disjunctions occur in a subquery, they may not be handled well by the optimizer. One can in fact observe that for a query of the form

```sql
SELECT *
FROM   R
WHERE  NOT EXISTS (
         SELECT *
         FROM   S, ..., T
         WHERE  ( A = B OR B IS NULL )
           AND  ...
           AND  ( X = Y OR Y IS NULL )
       )
```

the estimated cost of the query plan can be thousands of times higher than for the same query from which the IS NULL conditions are removed. One way to overcome this is quite simple and takes advantage of the fact that such disjunctions occur inside NOT EXISTS subqueries. One can propagate the disjunctions in the subquery, which results in a NOT EXISTS condition of the form $\neg \exists \bar{x} \bigvee_{l=1}^{k} \varphi_l(\bar{x})$, where each $\varphi_l$ is a conjunction of atoms. This in turn can be split into the conjunction of the conditions $\neg \exists \bar{x}\, \varphi_l(\bar{x})$, ending up with a query of the form

```sql
SELECT *
FROM   R
WHERE  NOT EXISTS (
         SELECT * FROM S_i, i ∈ I_1 WHERE ⋀_j ψ_j^1
       )
  AND  ...
  AND  NOT EXISTS (
         SELECT * FROM S_i, i ∈ I_k WHERE ⋀_j ψ_j^k
       )
```

where the formulae $\psi_j^l$ are comparisons of attributes and statements that an attribute is or is not null, and the relations $S_i$ for $i \in I_l$ are those that contain the attributes mentioned in the $\psi_j^l$.
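The rewrite rests on the logical identity $\neg\exists\bar{x}\,(\varphi_1 \lor \varphi_2) \equiv \neg\exists\bar{x}\,\varphi_1 \land \neg\exists\bar{x}\,\varphi_2$. A small sanity check of the equivalence on a toy schema of our own (via sqlite3; a real experiment would compare plans on a full DBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE R (a INTEGER);
    CREATE TABLE S (b INTEGER);
    INSERT INTO R VALUES (1), (2), (3);
    INSERT INTO S VALUES (2), (NULL);
""")

# Original form: one NOT EXISTS with a disjunction in the WHERE clause.
disjunctive = conn.execute("""
    SELECT a FROM R
    WHERE NOT EXISTS (SELECT * FROM S WHERE S.b = R.a OR S.b IS NULL)
""").fetchall()

# Split form: one NOT EXISTS per disjunct, joined by AND.
split = conn.execute("""
    SELECT a FROM R
    WHERE NOT EXISTS (SELECT * FROM S WHERE S.b = R.a)
      AND NOT EXISTS (SELECT * FROM S WHERE S.b IS NULL)
""").fetchall()

# Both are empty here: S contains a null, so the IS NULL disjunct (and the
# corresponding decorrelated NOT EXISTS) fails for every row of R.
print(disjunctive == split)  # True
```

The split form gives the optimizer several simple, disjunction-free subqueries; as noted above, one of them may even become decorrelated.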
### Translating additional features

The queries $Q_1$–$Q_4$ on which we test the approach go slightly beyond relational algebra as used in the previous section: they use $>$ and LIKE comparisons, and $Q_2$ refers to an aggregate subquery. As for the first two, looking at the SQL-adjusted translations of selection conditions, we can see that there is nothing special about (dis)equality: the same translations can be applied to other comparisons, and this is what we do. For the aggregate subquery, we just treat it as a black box; that is, we view the result of that subquery as a value $c$ and apply the translation to the condition $c\_acctbal > c$.

### Experimental results

Note that we measure the relative performance of the correct translations $Q_i^+$, that is, the ratio of the running times of the $Q_i^+$s and the original queries $Q_i$s. Intuitively, this ratio should not significantly depend on the size of the generated instances. With this hypothesis in mind, we first report detailed results for the smallest allowed size of TPC-H instances (roughly 1GB). After that, we test our hypothesis using instances of 3GB, 6GB, and 10GB size, and show that relative performances indeed remain about the same for all instance sizes for queries $Q_1$, $Q_2$, and $Q_3$, although they can decrease slightly for $Q_4$ (we shall discuss this later).

For the experiments described below, we use the DBGen tool of TPC-H to generate databases of about 1GB each, and then we populate them with nulls, depending on the prescribed null rate. For each null rate in the range 1%–5%, in steps of 1%, we generate 10 incomplete databases. On each such database, we instantiate our test queries 5 times with randomly generated values for their parameters, and we run each query instance 3 times. The results that we report are averages of those runs. Since we are interested in the cost of correctness, we report relative values of the parameters: one for the original query, and the other for the translation.
Thus, if $t$ is the time it takes a query $Q$ to run, and $t^+$ is the running time of $Q^+$, we report the ratio $t^+/t$. In particular, a ratio staying close to 1 means that the price of correctness is low, as correctness guarantees do not affect the running time; if it drops below 1, we actually win by running the correct version of the query. Based on the experiments we conduct, we observe three types of behavior, reported in Figure 4.

1. For queries $Q_1$ and $Q_3$, the price of correctness is negligible for most applications: under 4% for both.
2. For query $Q_2$, the translation with correctness guarantees is significantly faster than the original query; in fact, it is more than 3 orders of magnitude faster on average. Note that the relative performance scale in the graph for $Q_2^+$ ranges from $2 \cdot 10^{-4}$ to $8 \cdot 10^{-4}$.
3. For query $Q_4$, the behavior is the worst among those we observed, as the running time of $Q_4^+$ is almost double the running time of $Q_4$. This is still tolerable though if correctness of results is very important.

### Larger database instances

The previous results were obtained for the smallest allowed TPC-H instances. As we explained, since we measure relative performance, we conjectured that the exact size of the instance should not have much impact: running times for both the $Q_i$s and the $Q_i^+$s will increase, but proportionally so. To confirm this experimentally, we generated instances of sizes 3GB, 6GB, and 10GB, and ran similar tests (with fewer test runs for larger instances, as running times increased significantly). Our results, summarized in Table 1, validate our conjecture completely for queries $Q_1$, $Q_2$, and $Q_3$, as relative performances indeed change very little. For $Q_4$, we observe a decrease in performance from roughly half the speed of the original query for 1GB databases to one quarter of the speed for 10GB databases; we shall comment on this below.
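The relative-performance measure $t^+/t$ described above can be computed with a small harness; the sketch below is our own illustration (using sqlite3 and toy queries as stand-ins; the actual experiments time TPC-H queries on a full DBMS):

```python
import sqlite3
import time

def relative_performance(conn, q, q_plus, runs=3):
    """Average running time of the translated query divided by that of the
    original query, mirroring the reported ratio t+/t."""
    def avg_time(sql):
        total = 0.0
        for _ in range(runs):
            start = time.perf_counter()
            conn.execute(sql).fetchall()   # force full evaluation
            total += time.perf_counter() - start
        return total / runs
    return avg_time(q_plus) / avg_time(q)

# Toy stand-in: a query and a "translated" variant with an extra IS NULL disjunct.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE R (a INTEGER)")
conn.executemany("INSERT INTO R VALUES (?)", [(i,) for i in range(1000)])

ratio = relative_performance(conn,
                             "SELECT a FROM R WHERE a > 10",
                             "SELECT a FROM R WHERE a > 10 OR a IS NULL")
print(ratio > 0)  # a ratio close to 1 means a low price of correctness
```

Averaging over several runs, as in the described methodology, smooths out timing noise for short-running queries.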
Before analyzing these results, we address the standard measures for evaluating the quality of approximation algorithms, namely precision and recall. The first refers to the percentage of correct answers given. With the correctness guarantees proven in the previous section, we can thus state that the precision of our algorithms is 100%. Recall refers to the fraction of relevant answers returned. In our case, we can look at the certain answers returned by the standard SQL evaluation of a query $Q$, and see how many of them are returned by $Q^+$. The ratio of those is what we mean by recall in this scenario. We saw that, in some artificial examples, $Q^+$ may miss several, or even all, certain answers returned by $Q$. Thus, we cannot state a theoretical bound on the recall, but we can see what it is in the scenarios represented by our test queries. For this, one needs either highly intractable algorithms for computing certain answers, or at least algorithms for identifying false positives. The latter we gave in Section 4 for the SQL evaluation of $Q_1$–$Q_4$, and tested them on smaller TPC-H instances generated by DataFiller. Thus, we ran the queries $Q_i^+$ (modified versions for $Q_2$ and $Q_4$) on those smaller instances and observed that they returned precisely the answers to the $Q_i$s except the false positive tuples. That is, for those instances, the recall rate was 100%, and we did not miss any certain answers.

### Discussion

We now discuss the factors that cause the behavior reported in Figure 4. We start with queries $Q_1$ and $Q_3$, whose behavior is quite similar. Note that the key change that our translation introduces is the change of comparisons $A \ op \ B$ to ($A \ op \ B$) OR ... IS NULL inside correlated NOT EXISTS subqueries. The number of such disjunctions is small, and they are well handled by the optimizer, resulting in small overheads. For $Q_3^+$, these overheads get lower as the null rate gets higher.
This is most likely due to the fact that, with a higher null rate, it is easier to satisfy the IS NULL conditions in the WHERE clause of the NOT EXISTS subquery. As a result, a counterexample to the NOT EXISTS subquery can be found earlier, resulting in an overall faster evaluation.

For query $Q_2$, the translation is similar, but there is one big difference: after we split the disjunction in the correlated NOT EXISTS subqueries, as explained earlier, one of the resulting NOT EXISTS subqueries becomes decorrelated. It simply tests for the existence of nulls in the attribute o_custkey of orders, and once it finds one, the evaluation of the entire query ends, as we know that the result will be empty. The original query, on the other hand, spends most of its time looking for incorrect answers: this is the query with a rate of false positive answers close to 100%. Hence, in this case, the translation $Q_2^+$ not only ensures correctness, but also speeds up the execution time by a factor of over $10^2$, as it is able to detect early that the correct answer is empty. In fact, as instances grow larger, one wins even more by using the correct query $Q_2^+$, as the original $Q_2$ is forced to spend more time looking for incorrect answers.

Query $Q_4$ is the hardest one to deal with. Without splitting the OR conditions, PostgreSQL produces astronomical costs of query plans, as it resorts to nested-loop joins, even for large tables (this is due to the fact that it under-estimates the size of joins, which is a known issue for major DBMSs [18]). Hence the direct translation of this query requires some tuning. This is achieved in two steps. First, we split the disjunctions into several NOT EXISTS conditions, as explained earlier. Even then, the NOT EXISTS subqueries have nested EXISTS subqueries, each of them appearing twice. We define those subqueries as views (using WITH) and then replace the subqueries with references to those views.
These modifications are sufficient to make the optimizer produce better estimates and a reasonable query plan, which runs at roughly half the speed of the original query (for 1GB databases).

What makes the performance of $Q_4$ the worst of the four queries is that it is the only one that has a multi-way join in the NOT EXISTS subquery; all the others have no joins in such subqueries at all. This means that absolute running times are significantly higher for $Q_4$ than for the other queries. The translation has four correlated subqueries, three of which use joins that involve the largest lineitem relation, which accounts for the decrease in relative performance (as the original query has only one multi-way join subquery). Of course, the need for these multiple subqueries has arisen from the inability of the optimizer to handle disjunctions with IS NULL conditions. We believe that this problem may be overcome with a proper implementation of marked nulls (see additional comments in Section 8).

### Conclusions

Our main conclusion is that it is practically feasible to modify SQL query evaluation over databases with nulls to guarantee the correctness of its results. This applies to the setting where nulls mean that a value is missing, and the fragment of SQL corresponds to first-order queries. This could not be achieved with the theoretical solutions presented earlier [22] and required new ways of modifying SQL queries. Depending on the exact translation involved, we saw queries running at roughly half the speed in the worst case, or almost $10^3$ times faster in the best case. For several queries, the overhead was small and completely tolerable, under 4%. With these translations, we also did not miss any of the correct answers that SQL evaluation returned.

### 8. FUTURE WORK

Given our conclusions that wrong answers to SQL queries in the presence of nulls are not just a theoretical myth – there are real-world scenarios where this happens – and that correctness can be restored with syntactic changes to queries at a price that is often tolerable, it is natural to look into the next steps that will lift our solution from the first-order fragment of SQL to cover more queries and more possible interpretations of incompleteness. We shall now discuss those.

**Bag semantics.** SQL queries use multiset, or bag, semantics, and handling duplicates is an important aspect of the language. However, at this point we do not even have a proper theory of certain answers for bag semantics: neither established notions that one can measure against, nor complexity results. We need to understand what the analog of cert$(Q, D)$ is for queries under bag semantics, and how to define $Q^+$ in that case.

**Aggregate functions.** An important feature of real-life queries is aggregation; in fact, it is present in most of the TPC-H queries. However, here our understanding of correctness of answers is quite poor; SQL's rules for aggregation and nulls are rather ad hoc and have been persistently criticized [7, 9]. Thus, much theoretical work is needed in this direction before practical algorithms emerge. There is a better understanding of aggregate queries in neighboring areas such as probabilistic databases [25, 27] or inconsistent databases [5], and this could serve as a starting point.

**Marked nulls.** The translations $Q \mapsto Q^+, Q^\star$ work at the level of marked and Codd nulls, but SQL nulls fall a bit short of Codd nulls, not being able to compare a null with itself. While the sample queries used here were not affected, some queries may be. Ideally, one would use marked nulls to overcome this problem. Marked nulls could also be used to overcome the issues with OR ... IS NULL conditions, as the optimizer would then see them as usual disjunctions involving value comparisons.
Marked nulls have been implemented in connection with data exchange systems [13, 24], and one has access to multiple querying scenarios involving marked nulls using schema mapping benchmarks [3, 6]; hence we intend to create new base types that use marked nulls and experiment with translations in that setting. If marked nulls are not available, we need to find a precise characterization of the queries for which the translations proposed here restore correctness with SQL nulls.

**Incorporating constraints.** In the definition of certain answers, we disregarded constraints, although every real-life database will satisfy some, typically keys and foreign keys. While constraints \( \psi \) can be incorporated into a query \( \phi \) by finding certain answers to \( \psi \rightarrow \phi \), for common classes of constraints we would like to see how to make direct adjustments to the rewritings. We have seen one example of this: the presence of a key constraint let us replace the left unification anti-semijoin \( R \mathbin{\overline{\ltimes}} S \) by the difference \( R - S \). We would like to automate such query transformations based on common classes of constraints.

**Other types of incomplete information.** So far we dealt with missing-information nulls, but there are other interpretations. For instance, non-applicable nulls [20, 32] arise commonly as the result of outer joins. We need to extend the notion of correct query answering and the translations of queries to them. One possibility is to adapt the approach of [21], which shows how to define certainty based on the semantics of inputs and outputs of queries. At the level of missing information, we would like to see whether our translations could help with deriving partial answers to SQL queries, when parts of a database are missing, as in [16].

**Direct SQL rewriting.** We have rewritten SQL queries by a detour via relational algebra.
We should look into both running such queries directly on a DBMS (and perhaps take advantage of the good properties of semijoins [17], which feature prominently in our translations), and into direct rewriting from SQL to SQL, without an intermediate language.

### Acknowledgments

We are grateful to Marco Console for many helpful discussions during the early stages of this work, and to Chris Ré and the anonymous referees for their comments. Work partially supported by EPSRC grants J015377 and M025268.

### 9. REFERENCES

### APPENDIX

We present the exact translations of the queries used in our experiments. Query $Q_3^+$ was already shown in Section 7. Queries $Q_1^+$, $Q_2^+$, and $Q_4^+$ are given below.

Query $Q_1^+$

```sql
SELECT s_suppkey, o_orderkey
FROM supplier, lineitem l1, orders, nation
WHERE s_suppkey = l1.l_suppkey
  AND o_orderkey = l1.l_orderkey
  AND o_orderstatus = 'F'
  AND l1.l_receiptdate > l1.l_commitdate
  AND s_nationkey = n_nationkey
  AND n_name = $nation
  AND EXISTS (SELECT *
              FROM lineitem l2
              WHERE l2.l_orderkey = l1.l_orderkey
                AND l2.l_suppkey <> l1.l_suppkey)
  AND NOT EXISTS (SELECT *
                  FROM lineitem l3
                  WHERE l3.l_orderkey = l1.l_orderkey
                    AND ( l3.l_receiptdate > l3.l_commitdate
                          OR l3.l_receiptdate IS NULL
                          OR l3.l_commitdate IS NULL ))
```

Query $Q_2^+$

```sql
SELECT c_custkey, c_nationkey
FROM customer
WHERE c_nationkey IN ($countries)
  AND c_acctbal > (SELECT AVG(c_acctbal)
                   FROM customer
                   WHERE c_acctbal > 0.00
                     AND c_nationkey IN ($countries))
  AND NOT EXISTS (SELECT *
                  FROM orders
                  WHERE o_custkey = c_custkey)
  AND NOT EXISTS (SELECT *
                  FROM orders
                  WHERE o_custkey IS NULL)
```

Query $Q_4^+$

```sql
WITH part_view AS
     (SELECT p_partkey FROM part WHERE p_name IS NULL
      UNION
      SELECT p_partkey FROM part WHERE p_name LIKE '%$color%'),
     supp_view AS
     (SELECT s_suppkey FROM supplier WHERE s_nationkey IS NULL
      UNION
      SELECT s_suppkey FROM supplier, nation
      WHERE s_nationkey = n_nationkey AND n_name = '$nation')
SELECT o_orderkey
FROM orders
WHERE NOT EXISTS (SELECT * FROM lineitem,
                         part_view, supp_view
                  WHERE l_orderkey = o_orderkey
                    AND l_partkey = p_partkey
                    AND l_suppkey = s_suppkey)
  AND NOT EXISTS (SELECT * FROM lineitem, supp_view
                  WHERE l_orderkey = o_orderkey
                    AND l_partkey IS NULL
                    AND l_suppkey = s_suppkey
                    AND EXISTS (SELECT * FROM part_view))
  AND NOT EXISTS (SELECT * FROM lineitem, part_view
                  WHERE l_orderkey = o_orderkey
                    AND l_partkey = p_partkey
                    AND l_suppkey IS NULL
                    AND EXISTS (SELECT * FROM supp_view))
  AND NOT EXISTS (SELECT * FROM lineitem
                  WHERE l_orderkey = o_orderkey
                    AND l_partkey IS NULL
                    AND l_suppkey IS NULL
                    AND EXISTS (SELECT * FROM part_view)
                    AND EXISTS (SELECT * FROM supp_view))
```
Memory-Efficient Database Fragment Allocation for Robust Load Balancing when Nodes Fail Stefan Halfpap* Hasso Plattner Institute, Potsdam, Germany stefan.halfpap@hpi.de Rainer Schlosser* Hasso Plattner Institute, Potsdam, Germany rainer.schlosser@hpi.de Abstract—Load balancing queries that access the same data fragments to the same node improves caching for a memory-efficient scale-out. However, to suitably allocate fragments to multiple nodes is a highly challenging problem, particularly when nodes might fail. The problem is to find a good balance between memory efficiency and allocating enough fragments to nodes to obtain robustness through load balancing flexibility. Existing allocation approaches are either not memory-efficient or result in load imbalances, both degrading cost/performance. In this paper, we present an optimal approach and a scalable heuristic, based on three mutually supportive linear programming models, to calculate memory-efficient fragment allocations that guarantee to distribute the workload evenly - even in the case of node failures. We demonstrate the applicability and the effectiveness of our three-step approach using numerical as well as end-to-end evaluations for TPC-H and TPC-DS workloads. We find that our robust solutions clearly outperform state-of-the-art heuristics by achieving a better workload distribution with even less memory. I. INTRODUCTION Scalability and robustness against node failures are indispensable for database systems running in the cloud. Both can be achieved with query load balancing and data replication. Using a naive load balancing approach, queries are distributed independently of the accessed data. This approach has several drawbacks. All nodes have to store or potentially load all data. Further, all nodes must apply all data modifications caused by inserts, deletes, or updates. 
Finally, queries are unlikely to profit from caching effects, because similar queries are not guaranteed to be assigned to the same replica. Query-driven workload distribution tackles this problem by load balancing queries to nodes of a cluster based on the accessed data. Such an approach is beneficial to optimize cost/performance [1], e.g., (i) in caching architectures, e.g., data marts [2] or mid-tier caches [3], (ii) for operator placement in distributed database systems, and (iii) for partially replicated database systems [4], [5]. In general, data can be allocated, i.e., stored or cached, at nodes such that the load can be evenly balanced, which is crucial for scalability, and data caching is optimized. However, in the presence of failures, in which the load of the failed node has to be distributed among the remaining nodes, memory-efficient data allocations may result in load imbalances, increased cache misses, or required reallocations. It is challenging to calculate memory-efficient allocations that guarantee an even workload distribution in failure cases, because we must consider potential failures of all nodes, at which different subsets of fragments are allocated and which are, thus, optimized for different subsets of queries. This paper presents a general approach to calculate memory-efficient data allocations that guarantee to distribute the workload evenly - even in cases of node failures - by linear programming (LP). Although the developed allocation concepts are general, we focus on partial replication as one specific and thoroughly evaluated use case. The use of LP enables transferring our approach to versatile allocation problems by adapting the optimization goals and constraints [6]. Partial replication [4], [5] is a cost- and cache-efficient implementation of primary replication, which is a common scale-out option for single-node database systems. 
All major relational database systems, e.g., Oracle, IBM DB2, Microsoft SQL Server, SAP HANA, PostgreSQL, and MySQL, support replication. Partial replication lowers the memory consumption and improves caching, but reduces the robustness of a cluster, e.g., the accessibility of data fragments and executability of queries in the case of potential node failures. Our contributions are:

- We present an optimal model and a scalable heuristic to calculate memory-efficient and robust fragment allocations that allow to evenly balance workloads - particularly in the case of potential node failures. Our heuristic (see Figure 2) exploits the optimal LP model for decomposed subproblems and uses minimal data enhancements to guarantee a perfect load balance.
- We verify the effectiveness of our models using theoretical and end-to-end evaluations for TPC-H and TPC-DS. We show that our approach finds allocations that outperform state-of-the-art approaches, cf. [5], [7], in both memory consumption and worst-case query throughput in theory (see Table I and II) and practice (see Figure 3).

II. ROBUST FRAGMENT ALLOCATION PROBLEM

A. Problem Description

We study a system with $K$ nodes, where one of them might fail. We assume a partitioned database with $N$ fragments.

**Input:** The size of a fragment $i$ is $a_i$, $i = 1, ..., N$. We assume $K$ nodes, where data can be replicated. We assume a set of $Q$ (classes of) queries $j$, characterized by the accessed fragments $q_j \subseteq \{1, ..., N\}$, $j = 1, ..., Q$. Queries $j$ occur with frequency $f_j$, $j = 1, ..., Q$. Query costs $c_j$, $j = 1, ..., Q$, are independent of the executing node $k$. We use the total workload costs denoted by $C := \sum_{j=1,...,Q} f_j \cdot c_j$. © 2021 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

**Controls:** We use the following decision variables to decide (i) at which node to allocate which fragments and (ii) which query is executed on which node to which extent. The binary variable \( x_{i,k} \in \{0, 1\} \), \( i = 1, \ldots, N \), \( k = 1, \ldots, K \), indicates whether fragment \( i \) is allocated to node \( k \) (1) or not (0). The binary variable \( y_{j,k} \in \{0, 1\} \), \( j = 1, \ldots, Q \), \( k = 1, \ldots, K \), indicates whether query \( j \) can run on node \( k \) (1) or not (0). The continuous variable \( z_{j,k} \in [0, 1] \), \( j = 1, \ldots, Q \), \( k = 1, \ldots, K \), indicates the workload share of query \( j \) executed on node \( k \). Further, by \( W/V \), we denote the replication factor, where the total amount of allocated data \( W := \sum_{i=1}^{N} \sum_{k=1}^{K} x_{i,k} \cdot a_i \) is normalized by the amount of relevant data \( V := \sum_{i \in \bigcup_{j=1,\ldots,Q} q_j} a_i \), i.e., the total size of all fragments accessed by at least one query.

**Constraints:** We have the following constraints: A query \( j \) can only be executed on node \( k \) if all relevant fragments are stored at node \( k \). If all \( K \) nodes work, each node’s load has to be \( 1/K \). If a node fails, it has to be possible to evenly balance the workload among the \( K-1 \) remaining active nodes.

**Objective:** We seek to minimize the total amount of allocated data for a feasible workload distribution. Data has to be placed on multiple nodes such that all queries are still executable in failure cases without overloading single nodes. For an example with \( N=10 \) fragments and \( Q=5 \) queries, Figure 1 illustrates the solution for \( K=6 \) nodes (lower part).
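To make the quantities $W$, $V$, and the replication factor $W/V$ concrete, the following sketch computes them for a small allocation; the fragment sizes, allocation matrix, and query sets below are hypothetical illustration data, not taken from the paper:

```python
# Hypothetical illustration of W, V, and the replication factor W/V.
def replication_factor(a, x, queries):
    """a[i]: size of fragment i; x[i][k]: 1 if fragment i is stored on
    node k; queries: list of accessed fragment sets q_j."""
    N, K = len(a), len(x[0])
    # W: total amount of allocated data over all nodes
    W = sum(a[i] * x[i][k] for i in range(N) for k in range(K))
    # V: amount of relevant data, i.e., total size of all fragments
    # that are accessed by at least one query
    relevant = set().union(*queries)
    V = sum(a[i] for i in relevant)
    return W / V

a = [4, 2, 3]                    # fragment sizes a_i
x = [[1, 1], [1, 0], [0, 1]]     # fragment 0 is replicated to both nodes
queries = [{0, 1}, {0, 2}]       # accessed fragments q_j
print(replication_factor(a, x, queries))  # (4+4+2+3) / (4+2+3) ≈ 1.44
```

A replication factor of 1 corresponds to no redundancy; full replication over $K$ nodes yields a factor of $K$.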
The table below each node \( k \), \( k=1, \ldots, 6 \), specifies the query shares \( z_{j,k} \) without a node failure (column \( \emptyset \)) and the query shares for each single node failure (columns 1-6).

### B. Robust State-of-the-Art Heuristics

(i) **Greedy Approach** [5]. For allocations that are not based on LP, Rabl and Jacobsen propose a greedy approach to complement a risk-neutral solution into a robust one: Queries are sorted by the size of the fragments they access in descending order. If a query is already executable by multiple nodes, nothing has to be done. Otherwise, the query is assigned to the node with the largest fragment overlap of already assigned queries, considering only the nodes that cannot already execute the query. The load thereby added by redundant query assignments in potential failure cases is not taken into account. The load balancing among nodes may thus be highly skewed in failure cases. In the extreme case, a single node must take over the entire workload of the failed node and cannot pass anything of its regular workload to other nodes.

(ii) **Chaining Approach** [7]. Allocations that guarantee an even workload distribution in failure cases can be constructed by applying the chained declustering strategy to a risk-neutral solution: Nodes are chained, forming a ring. The successor of each node is its backup. In addition to the fragments of the basic allocation, each (backup) node gets assigned all of its predecessor’s fragments. As a result, the backup can take over the complete assigned regular load of its predecessor and pass an arbitrary share of its own regular workload to its successor.

(iii) **Adding Full Replicas.** To address cases with up to \( F \) node failures, one can use a basic solution for \( K-F \) nodes and add \( F \) full replicas. If the \( F \) full replicas fail, the \( K-F \) solution balances the load evenly by definition. Otherwise, the remaining full replicas can take the workload of any failed partial node.
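The construction in (ii) can be sketched in a few lines; the base allocation below is hypothetical:

```python
# Sketch of chained declustering (ii): nodes form a ring, and each node
# additionally stores all fragments of its ring predecessor, so the
# successor can take over the predecessor's complete regular load.
def chain_allocation(base):
    """base[k]: set of fragment ids on node k in a risk-neutral layout."""
    K = len(base)
    return [base[k] | base[(k - 1) % K] for k in range(K)]

base = [{0, 1}, {2, 3}, {4}]     # hypothetical risk-neutral allocation
print(chain_allocation(base))    # node k also stores node (k-1)'s fragments
```

The memory cost of this construction is visible directly: every fragment is stored (at least) twice, regardless of whether its backup copy is ever needed for load balancing.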
However, full replicas can be memory-expensive.

### III. Optimal Robust Solution

To solve the problem described in Section II-A, \( K \) potential failure cases and the non-failure case must be considered. By \( L \), we denote the highest workload share of all nodes in the regular case without a failure. Note that the limit \( L \) is determined by the allocation \( x_{i,k} \) and the assigned workload shares \( z_{j,k} \), see Section II-A. By \( L^{(-)} \), we denote the highest (worst-case) workload share over all nodes that can occur in case of potential node failures. To optimize the limit \( L^{(-)} \), we introduce the additional variables \( \tilde{z}_{j,k^{(-)},k^{(+)}} \in [0, 1] \), which describe the adjusted workload share of query \( j \) on a remaining node \( k^{(+)} \in \{1, \ldots, K\} \setminus \{k^{(-)}\} \) in case node \( k^{(-)} = 1, \ldots, K \) fails. In the tables of Figure 1, \( \tilde{z} \) refers to the columns 1-6. To obtain solutions with workload limits \( L \) (regular case) and \( L^{(-)} \) (failure case) that are as small as possible, we use a common penalty approach in our LP model, where both limits are penalized via one penalty factor (denoted by \( \alpha > 0 \)). To emphasize the failure scenarios, the penalty on \( L^{(-)} \) is chosen much larger (e.g., \( \times 100 \)) than the penalty for \( L \).
Our LP for optimal robust solutions with single node failures reads as:

\[ \text{minimize} \quad \sum_{i=1}^{N} \sum_{k=1}^{K} a_i \cdot x_{i,k} + \alpha \cdot L^{(-)} + \frac{\alpha}{100} \cdot L \quad (1) \]

subject to constraints for the regular scenario:

\[ \sum_{i \in q_j} x_{i,k} \geq |q_j| \cdot y_{j,k}, \quad j = 1, \ldots, Q, \; k = 1, \ldots, K \quad (2) \]

\[ z_{j,k} \leq y_{j,k}, \quad j = 1, \ldots, Q, \; k = 1, \ldots, K \quad (3) \]

\[ \sum_{j=1}^{Q} \frac{f_j \cdot c_j}{C} \cdot z_{j,k} \leq L, \quad k = 1, \ldots, K \quad (4) \]

\[ \sum_{k=1}^{K} z_{j,k} = 1, \quad j = 1, \ldots, Q \quad (5) \]

and constraints for the failure scenarios:

\[ \tilde{z}_{j,k^{(-)},k^{(+)}} \leq y_{j,k^{(+)}}, \quad j = 1, \ldots, Q, \; 1 \leq k^{(-)} \neq k^{(+)} \leq K \quad (6) \]

\[ \sum_{j=1}^{Q} \frac{f_j \cdot c_j}{C} \cdot \tilde{z}_{j,k^{(-)},k^{(+)}} \leq L^{(-)}, \quad 1 \leq k^{(-)} \neq k^{(+)} \leq K \quad (7) \]

\[ \sum_{k^{(+)} \neq k^{(-)}} \tilde{z}_{j,k^{(-)},k^{(+)}} = 1, \quad j = 1, \ldots, Q, \; 1 \leq k^{(-)} \leq K \quad (8) \]

Objective (1) minimizes the replication factor \( W/V \) and contains a penalty term for the largest workload shares \( L \) and \( L^{(-)} \). (2) guarantees that a query \( j \) can only be executed on node \( k \) if all relevant fragments are available, see Section II. The cardinality term \(|q_j|\) expresses the number of fragments used in query \( j \). (3) ensures that a query \( j \) can only have a positive workload share on node \( k \) if it can be executed on node \( k \): if \( y_{j,k} = 0 \), then \( z_{j,k} = 0 \) follows; if \( y_{j,k} = 1 \), the shares \( z_{j,k} \) are not restricted. Note that (3) couples the binary variables \( y \) and the continuous variables \( z \) in a linear way. (4) guarantees that no node \( k \) exceeds the workload limit \( L \). (5) ensures that a query’s workload shares over all nodes \( k \) sum up to one. Constraints (6)-(8) ensure admissible allocations in case a node \( k^{(-)} = 1, \ldots, K \) is not working. The additional variables \( \tilde{z}_{j,k^{(-)},k^{(+)}} \), \( j = 1, \ldots, Q \), express the optimal workload share of query \( j \) on the remaining nodes \( k^{(+)} \in \{1, \ldots, K\} \setminus \{k^{(-)}\} \) if node \( k^{(-)} \) does not work. If the penalty \( \alpha \) is sufficiently large, the solution guarantees the smallest possible emergency workload limit \( L^{(-)*} = 1/(K-1) \), \( K > 1 \).
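To see the structure of the model at a small scale, the following sketch enumerates all binary allocations of a tiny hypothetical instance and reports the cheapest one that keeps every query executable under any single node failure. It only mirrors the executability side of constraints (2) and (6)-(8); the load-limit constraints (4) and (7) and the penalty terms are omitted, so this is an illustration of the model's structure, not the solver used in the paper (which is Gurobi):

```python
# Toy exhaustive search over allocations x for a tiny hypothetical
# instance.  Only executability is checked (every query must remain
# runnable after any single node failure); load-limit constraints are
# omitted for brevity, and a real MILP solver replaces this search.
from itertools import product

a = [1, 2, 3]            # hypothetical fragment sizes
queries = [{0}, {1, 2}]  # hypothetical accessed fragment sets q_j
K, N = 2, 3

def executable(x, q, k):
    # analog of constraint (2): node k stores all fragments of query q
    return all(x[i][k] for i in q)

def robust(x):
    # after any single failure, every query must run on a remaining node
    return all(
        any(executable(x, q, k) for k in range(K) if k != down)
        for down in range(K) for q in queries
    )

best = min(
    sum(a[i] * x[i][k] for i in range(N) for k in range(K))
    for bits in product([0, 1], repeat=N * K)
    for x in [[bits[i * K:(i + 1) * K] for i in range(N)]]
    if robust(x)
)
print(best)  # with K = 2, a failure forces full replication: W = 2 * 6 = 12
```

With only two nodes, surviving an arbitrary single failure forces every accessed fragment onto both nodes, which the search confirms; the interesting trade-offs of the LP only appear for larger $K$.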
As all scenarios are mutually coupled, they cannot be optimized independently. We numerically evaluate the model for the TPC-H and TPC-DS setup described in Section V-A. Table I summarizes the results of our optimal solution (cf. \( W^{R*} \)) and compares the memory consumption and worst-case workload shares against the greedy robust heuristic (cf. \( W^{GR} \)) and the chaining approach (cf. \( W^{CR} \)) based on risk-neutral allocations of [5]. To calculate the worst-case workload limits \( L^{GR(-)}_{max} \), we can use the LP (1)-(8) with a fixed fragment allocation \( x := x^{GR} \). We used the Gurobi solver (version 9.0.0) (single-threaded on a laptop) with \( \alpha := 1000 \), cf. (1). The results of Table I show that our solution outperforms the greedy robust heuristic [5] in both required memory (up to 24.9% less) and worst-case workload share (up to 42.5% lower). Note that \( W^{GR} < W^{R*} \) is only possible because \( L^{GR(-)}_{max} \) is worse than the optimal limit \( L^{(-)*} \), cf. \( K = 4 \), TPC-H. Compared to the chained heuristic [7], our solution lowers the memory consumption more substantially (up to 41.8% less) while providing the same worst-case limits. As the complexity quickly increases, the LP is only applicable to small problems.

**Remark 1** Existing robust allocation approaches have limitations. We find that with allocations of the greedy robust heuristic [5], the load balance can be uneven if nodes fail. In contrast, the chained heuristic [7] is memory-expensive. As the optimal solution does not scale, a heuristic is needed that combines both reliable robustness and memory efficiency.

### IV. Heuristic Three-Step Solution

We propose a three-step heuristic to solve problem (1)-(8).

**A. Step 1: Initial Recursive Chunking**

In Step 1, we split the workload iteratively using the risk-neutral LP approach described in [4], forming a tree of chunks of similar queries, which access the same fragments.
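Step 1's grouping of similar queries can be illustrated with a simple greedy stand-in; the paper instead derives the chunks via the risk-neutral LP of [4], and the queries below are hypothetical:

```python
# Illustrative stand-in for Step 1: group queries into B chunks by
# fragment overlap, so queries accessing the same fragments end up in
# the same chunk (the paper derives chunks via the risk-neutral LP [4]).
def chunk_queries(queries, B):
    chunks = [{"queries": [], "fragments": set()} for _ in range(B)]
    order = sorted(range(len(queries)), key=lambda j: -len(queries[j]))
    for j in order:  # place large queries first
        best = max(chunks,
                   key=lambda c: (len(c["fragments"] & queries[j]),
                                  -len(c["queries"])))
        best["queries"].append(j)
        best["fragments"] |= queries[j]
    return chunks

queries = [{0, 1}, {0, 1, 2}, {3, 4}, {4, 5}]
chunks = chunk_queries(queries, B=2)
print([sorted(c["fragments"]) for c in chunks])  # [[0, 1, 2], [3, 4, 5]]
```

The point of the grouping, greedy or LP-based, is that the fragment sets of different chunks overlap as little as possible, so each chunk can later be made robust largely in isolation.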
We specify the assigned workload of a chunk by the (fixed) parameters \( \bar{x}_{i} \in \{0, 1\} \), \( \bar{y}_{j} \in \{0, 1\} \), \( \bar{z}_{j} \in \{0, 1\} \), \( i = 1, \ldots, N \), \( j = 1, \ldots, Q \). We only consider (i) involved fragments \( i \), where \( \bar{x}_{i} \) is 1, and (ii) queries \( j \), where \( \bar{z}_{j} > 0 \). The chunk’s workload share is denoted by \( \bar{w} := \sum_{j=1}^{Q} \bar{w}_{j} \) with \( \bar{w}_{j} := \bar{z}_{j} \cdot f_j \cdot c_j / C \).

**Table I**

<table>
<thead>
<tr>
<th>$K$</th>
<th>$W^{R*}$</th>
<th>$L^{(-)*}$</th>
<th>time</th>
<th>vs. $W^{GR}$</th>
<th>vs. $L^{GR(-)}_{max}$</th>
<th>vs. $W^{CR}$</th>
</tr>
</thead>
<tbody>
<tr>
<td>3</td>
<td>2.311</td>
<td>0.500</td>
<td>0.2 s</td>
<td>-5.3%</td>
<td>+0.0%</td>
<td>-14.9%</td>
</tr>
<tr>
<td>4</td>
<td>2.651</td>
<td>0.333</td>
<td>1.5 s</td>
<td>+3.7%</td>
<td>-32.4%</td>
<td>-15.0%</td>
</tr>
<tr>
<td>5</td>
<td>2.296</td>
<td>0.250</td>
<td>573 s</td>
<td>-17.8%</td>
<td>-4.0%</td>
<td>-37.7%</td>
</tr>
<tr>
<td>6</td>
<td>3.153</td>
<td>0.200</td>
<td>29.1 s</td>
<td>-5.5%</td>
<td>-11.4%</td>
<td>-23.3%</td>
</tr>
<tr>
<td>7</td>
<td>4.311</td>
<td>0.143</td>
<td>3507 s</td>
<td>-0.9%</td>
<td>-42.5%</td>
<td>-21.9%</td>
</tr>
<tr>
<td>8</td>
<td>3.912</td>
<td>0.125</td>
<td>8551 s</td>
<td>-2.5%</td>
<td>-40.4%</td>
<td>-25.1%</td>
</tr>
</tbody>
</table>

(a) TPC-H

### B. Step 2: Robustness on the Final Level

Assume, on the final decomposition level of Step 1, we end up with \( B \) chunks, where the data allocation for each chunk \( b \), \( b = 1, \ldots, B \), with weight \( \bar{w}_{b} = n_{b}/K \) (where \( n_b \) denotes the number of nodes assigned to chunk \( b \)) is characterized by the (fixed) values \( \bar{x}_{i,b}, \bar{y}_{j,b}, \bar{z}_{j,b} \). In Step 2, we proceed as follows: For each chunk $b$, $b = 1, ..., B$, we solve the problem (1)-(8) for the corresponding (smaller) inputs.
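Solving (1)-(8) per chunk requires restricting the global inputs to that chunk; a minimal sketch of this preprocessing (hypothetical data, with `chunk_inputs` as a hypothetical helper name that compacts fragment ids) could look like this:

```python
# Sketch of Step 2's preprocessing: restrict the global model inputs to
# one chunk, so the (smaller) problem (1)-(8) can be solved per chunk.
def chunk_inputs(a, queries, chunk_query_ids):
    qs = [queries[j] for j in chunk_query_ids]
    frags = sorted(set().union(*qs))             # fragments used by chunk
    remap = {i: n for n, i in enumerate(frags)}  # compact fragment ids
    return [a[i] for i in frags], [{remap[i] for i in q} for q in qs]

a = [1, 2, 3, 4, 5, 6]
queries = [{0, 1}, {1, 2}, {3, 4}, {4, 5}]
sub_a, sub_q = chunk_inputs(a, queries, [2, 3])  # chunk owning queries 2, 3
print(sub_a, sub_q)  # [4, 5, 6] and [{0, 1}, {1, 2}]
```

Because each subproblem only sees its chunk's fragments and queries, its LP has far fewer variables than the global model, which is what makes Step 2 tractable for larger clusters.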
Figure 2 shows an example of Step 2, which splits $B=2$ chunks over $K=6$ nodes in total (three nodes per chunk). The table below each node $k$ specifies the regular workload distribution $z_{j,k}$ (cf. column $\emptyset$) and six individual emergency load distributions $\tilde{z}_{j,k^{(-)},k^{(+)}}$ (cf. columns 1-6) for each potential node failure $k^{(-)} = 1, \ldots, 6$. After Step 2, the allocation is such that the emergency load distributions are only affected by failures of nodes from the same chunk (cf. Chunk 1). In such cases (e.g., $k^{(-)} = 2$), the workload distribution within the affected Chunk 1 is reorganized by evenly balancing the load between the remaining two nodes, Node 1 and Node 3 (load 1/4). The other chunk’s nodes retain their regular workload distribution (load 1/6).

### C. Step 3: Optimal Fragment Enhancements

The final Step 3 seeks to enrich the fragment allocation derived in Step 2 to obtain a perfect load balancing over all nodes in the case of any node failure. The goal is to use the minimal amount of additional data to guarantee the best possible workload limit $L^{(-)*} = 1/(K - 1)$. Let $x_{i,k}^{(f)} \in \{0, 1\}$ denote the allocation of Step 2, i.e., whether fragment $i = 1, ..., N$ is assigned to node $k = 1, ..., K$. We consider this allocation as fixed. In the LP (1)-(8), we use the new decision variables $x_{i,k}^{(e)} \in \{0, 1\}$ to decide whether to additionally add fragment $i$ to node $k$. The variable $x_{i,k}^{(e)}$ is used in the objective (1) instead of $x_{i,k}$. Further, we add the constraints

$$x_{i,k}^{(f)} + x_{i,k}^{(e)} = x_{i,k}, \quad i = 1, ..., N, k = 1, ..., K$$ (9)

to define the actual fragment allocation as the compound of the fixed and the newly added fragments. Compared to (1)-(8), the modified LP for Step 3 is of much lower complexity, as the $x_{i,k}^{(f)}$ are given parameters, i.e., those fragments cannot be removed.
Further, the freedom of the enhancement variables $x_{i,k}^{(e)}$ is limited, as (9) implies $x_{i,k}^{(e)} = 0$ for all $i, k$ with $x_{i,k}^{(f)} = 1$. The other variables $x, y, z, \text{and } \tilde{z}$ are of auxiliary character, as they are governed by $x_{i,k}^{(e)}$. In Step 3, comparably little data has to be assigned in total, because the allocations $x_{i,k}^{(f)}$ derived in Step 2 have the following beneficial properties: Assume an arbitrary solution of Step 2 with two chunks, say chunks $C_1$ and $C_2$. To obtain a perfect load balancing if a node of $C_1$ fails, the nodes of $C_2$ have to take additional load of $C_1$. However, as $C_2$ only has to be able to take some arbitrary load of $C_1$, it is sufficient to look for arbitrary nodes of $C_2$ that can be efficiently completed in this regard via additional fragments. Due to this flexibility, typically only little additional data is necessary. Finally, if one node of $C_1$ fails, the remaining nodes of $C_1$ as well as the nodes of $C_2$ can flexibly compensate the additional load, as within Step 2 all chunks are exactly optimized for such scenarios. Hence, for multiple chunks, it is sufficient if each chunk can take and pass enough load to one other chunk, and all chunks are connected. In Figure 2, Node 4 is enhanced with three fragments (2-4) for $q_1$, and three fragments (7-9) for $q_3$ are added to Node 6. After Step 3, we obtain that, whichever node fails, a perfect workload distribution can always be achieved (load 1/5). The final replication factor $W^{R}/V = 3.1$ is close to the optimal solution $W^{R*}/V = 2.8$ (see Figure 1). Note that the base allocations $x_{i,k}^{(f)}$ are a prerequisite for the applicability of Step 3, as without a suitable (already robust) backbone solution, the LP (1)-(8) might be too complex.

### V. Evaluation

After a description of our end-to-end evaluation setup, we compare our approaches against the results of [5] and [7].
#### A. Setup and Model Input

We set up a replicated PostgreSQL cluster with 16 nodes for running TPC-H and TPC-DS queries with scale factor 1. In the following, we describe how we obtained the model inputs. For TPC-H ($Q = 22$) and TPC-DS ($Q = 99$), we modeled query costs $c_j$ as the average processing time of query $j$ with random template parameters, $j = 1, ..., Q$. We deployed single-column indices on all primary key columns. As processing TPC-H queries 17 and 20 exceeded the set timeout of 120 s, we omitted them in our allocations. Likewise, we omitted the five most expensive TPC-DS queries, resulting in 94 remaining queries. We use vertical partitioning with each column as an individual fragment. Fragment/column sizes $a_i$, $i = 1, \ldots, N$, for TPC-H ($N = 61$) and TPC-DS ($N = 425$) are modeled using the PostgreSQL function pg_column_size(). In case there is an index on an attribute, the associated fragment size is increased by the index size. All model inputs to reproduce the calculation of all allocations are available online [8].

<table>
<thead>
<tr>
<th>$K$</th>
<th>chunks</th>
<th>$W^R_H$</th>
<th>$L^{H(-)}_{max}$</th>
<th>time</th>
<th>vs. $W^{GR}$</th>
<th>vs. $L^{GR(-)}_{max}$</th>
<th>vs. $W^{CR}$</th>
</tr>
</thead>
<tbody>
<tr><td>8</td><td>4+4</td><td>3.947</td><td>0.143</td><td>5.5 s</td><td>-45.5%</td><td>-42.5%</td><td>-16.9%</td></tr>
<tr><td>9</td><td>3+3+3</td><td>4.305</td><td>0.125</td><td>27.2 s</td><td>-47.2%</td><td>-40.4%</td><td>-17.5%</td></tr>
<tr><td>10 (2)</td><td>5+5</td><td>4.481</td><td>0.125</td><td>14.0 s</td><td>-40.4%</td><td>-12.9%</td><td>-19.5%</td></tr>
<tr><td>10</td><td>5+5</td><td>4.524</td><td>0.111</td><td>14.8 s</td><td>-40.5%</td><td>-22.6%</td><td>-18.8%</td></tr>
<tr><td>11</td><td>6+5</td><td>4.611</td><td>0.100</td><td>42.7 s</td><td>-55.0%</td><td>-30.3%</td><td>-19.6%</td></tr>
<tr><td>12</td><td>6+6</td><td>4.982</td><td>0.091</td><td>27.4 s</td><td>-31.3%</td><td>-36.7%</td><td>-19.5%</td></tr>
<tr><td>13</td><td>7+6</td><td>5.430</td><td>0.083</td><td>16.2 s</td><td>-43.8%</td><td>-41.9%</td><td>-13.5%</td></tr>
<tr><td>14</td><td>5+5+4</td><td>5.396</td><td>0.077</td><td>12.2 s</td><td>-7.2%</td><td>-29.8%</td><td>-21.7%</td></tr>
<tr><td>15</td><td>5+5+5</td><td>5.790</td><td>0.071</td><td>6.9 s</td><td>-1.0%</td><td>-28.1%</td><td>-16.4%</td></tr>
<tr><td>16 (2)</td><td>8+8</td><td>6.027</td><td>0.071</td><td>39.3 s</td><td>-5.7%</td><td>-24.5%</td><td>-37.7%</td></tr>
<tr><td>16</td><td>8+8</td><td>6.105</td><td>0.067</td><td>151 s</td><td>-2.5%</td><td>-32.8%</td><td>-19.0%</td></tr>
</tbody>
</table>

(a) TPC-H

<table>
<thead>
<tr>
<th>$K$</th>
<th>chunks</th>
<th>$W^R_H$</th>
<th>$L^{H(-)}_{max}$</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr><td>5</td><td>3+2</td><td>2.443</td><td>0.250</td><td>25.8 s</td></tr>
<tr><td>6</td><td>3+3</td><td>2.550</td><td>0.200</td><td>24.3 s</td></tr>
<tr><td>7</td><td>4+4</td><td>2.624</td><td>0.167</td><td>19.9 s</td></tr>
<tr><td>8</td><td>4+4</td><td>2.683</td><td>0.143</td><td>18.7 s</td></tr>
<tr><td>9</td><td>3+3+3</td><td>3.017</td><td>0.125</td><td>16.3 s</td></tr>
<tr><td>10</td><td>4+3+3</td><td>3.102</td><td>0.111</td><td>14.8 s</td></tr>
<tr><td>11</td><td>4+4+3</td><td>3.175</td><td>0.100</td><td>8.8 s</td></tr>
<tr><td>12</td><td>4+4+4</td><td>3.274</td><td>0.091</td><td>86.2 s</td></tr>
<tr><td>13</td><td>4+3+3+3</td><td>3.352</td><td>0.083</td><td>93.3 s</td></tr>
<tr><td>14</td><td>4+4+3+3</td><td>3.636</td><td>0.077</td><td>226.5 s</td></tr>
<tr><td>15 (2)</td><td>4+4+4+3</td><td>3.507</td><td>0.100</td><td>114 s</td></tr>
<tr><td>15 (2)</td><td>5+5+5</td><td>3.430</td><td>0.083</td><td>895 s</td></tr>
<tr><td>16</td><td>4+4+4+4</td><td>3.682</td><td>0.071</td><td>124 s</td></tr>
<tr><td>16</td><td>4+4+4+4</td><td>4.044</td><td>0.067</td><td>209 s</td></tr>
</tbody>
</table>

(b) TPC-DS

#### B. Numerical Evaluation of Step 2 and Step 3

Table II compares the results of our robust heuristic (cf. $W^R_H$) against the greedy [5] (cf.
$W^{GR}$) and the chaining approach [7] (cf. $W^{CR}$). The results verify that also large problems can be addressed in a reasonable time. For TPC-H (Table IIa), we find that our worst-case limits $L^{H(-)}_{max}$ are clearly better (up to 42.5%) compared to the greedy approach, although our heuristic requires similar or even less memory. To illustrate the impact of Step 3, we also include results obtained after Step 2 (indicated by (2)), cf. $K = 10, 16$. We observe that the amount of data enhancements required to realize an optimal load balancing is small. After Step 3, for all $K$ the limit $L^{H(-)}_{max}$ coincides with the optimal lower bound $L^{(-)*} = 1/(K - 1)$. In contrast, in some settings (e.g., TPC-H, $K = 8, 9, 13$) the limit $L^{GR(-)}_{max}$ of [5] can be close to the worst case $2 \times L^*$: see, e.g., TPC-H, $K = 8$, where $L^{GR(-)}_{max} = 0.248$ and $2 \times L^* = 2/K = 0.250$, which reflects the case in which one node has to (i) additionally take the entire workload of the failed node ($1/K$) and (ii) cannot pass any of its regular workload ($1/K$) to other nodes. Compared to the chaining approach, our three-step approach requires less memory (up to 21.7%) for all $K \geq 3$, while providing the same worst-case limits, i.e., $L^{H(-)}_{max} = L^{CR(-)}_{max}$. For TPC-DS (Table IIb), we observe that our heuristic consistently requires significantly less memory (up to 19.7%) than the greedy approach while still obtaining better (optimal) worst-case limits (up to 31.5%). Again, the data enhancements and additional runtime of Step 3 are small. Compared to optimal solutions (cf. Table IIb, $K = 5, 6, 7$) our heuristic’s memory consumption is near-optimal and, most importantly, remains applicable to larger numbers $K$. Compared to the chaining approach with the same worst-case limits, our three-step approach requires 30-38% less memory for all $K \geq 5$. To avoid long runtimes, we can use Step 2 with smaller chunks (see $K=4+4+4+3$ vs.
$K=5+5+5$, TPC-DS), which can be derived significantly faster (111 s vs. 895 s) while the required memory is only slightly (2%) higher. Recall that the greedy approach [5] does not optimize the limit $L^{GR(-)}_{max}$ and is, hence, to some extent unpredictable regarding its quality (see, e.g., $K=7$ vs. $K=12$, TPC-DS). In contrast, the chaining approach [7] does not focus on memory efficiency. Chaining entire (and mostly unrelated) nodes wastes optimization potential compared to both adding robustness per chunk and the finer-grained fragment enhancements of Step 3.

**Remark 2** The evaluation shows that our three-step heuristic clearly outperforms both robust approaches [5] and [7] when comparing (i) memory efficiency (up to 38% better) and (ii) worst-case workload limits (up to 42% better). Further, even large problems can be solved in a reasonable time. The quality of our results is based on the mutually supportive interplay of the three LP models, cf. Steps 1-3: Step 1 reduces the complexity of the initial problem using a memory-efficient workload decomposition. Exploiting the LP (1)-(8), Step 2 effectively adds robustness within the final chunks of Step 1. Finally, based on Step 2’s allocation, Step 3 guarantees a perfect load balance using optimal data enhancements.

#### C. End-to-End Evaluation

We evaluate the TPC-H throughput of allocations in a PostgreSQL cluster. The number of benchmark streams $S = 8 \cdot K$ (representing users) depends on the cluster size $K$. A central dispatcher maintains a query queue for each replica. For full replication, queries from stream $s$ are added to the queue of node $k = 1 + (s - 1) \mod K$. For partial replication, queries are added to the queue of a node that stores all fragments relevant to process the query (considering the costs of queued and currently processed queries). A fixed number of connections per replica is used to query the database, removing queries from the corresponding queue.
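The dispatcher's routing rule for partial replication can be sketched as follows; node contents, costs, and the tie-breaking by pending cost are simplified assumptions for illustration:

```python
# Sketch of the dispatcher: for partial replication, a query is queued
# at a node that stores all fragments it accesses, picking the eligible
# replica with the least pending work (a simplification of the described
# cost accounting over queued and currently processed queries).
def dispatch(query_frags, cost, node_frags, pending):
    eligible = [k for k, frags in enumerate(node_frags)
                if query_frags <= frags]
    k = min(eligible, key=lambda n: pending[n])  # least-loaded replica
    pending[k] += cost
    return k

node_frags = [{0, 1, 2}, {2, 3}, {0, 3}]  # hypothetical allocations
pending = [5.0, 1.0, 2.0]                 # queued cost per node
k = dispatch({3}, cost=2.0, node_frags=node_frags, pending=pending)
print(k, pending)  # nodes 1 and 2 are eligible; node 1 has less work
```

Note that with partial replication the set of eligible nodes shrinks with the query's fragment footprint, which is exactly why skewed allocations can bottleneck single nodes in failure cases.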
For $K=16$, there are $8 \cdot 16 = 128$ clients, resulting in 128 active queries at a time. We ran an experiment with each setting for 620 seconds, executing more than 8000 TPC-H queries. We started measuring the query throughput after a 180-second warm-up phase.

Fig. 3. End-to-end TPC-H throughput (queries per second) of allocations in a PostgreSQL cluster. (a) Regular and worst-case throughput of all scenarios for $K = 2, \ldots, 16$. (b) Throughput of all single-failure scenarios for $K = 13$.

For each number of nodes $K$ and each allocation, we evaluate $K + 1$ scenarios: the case with no failure and $K$ scenarios in which node $k$ failed, i.e., node $k$ is not used for query processing. Figure 3a shows the query throughput without a failure (regular) and the measured minimum (worst-case) performance over all failure scenarios. The end-to-end results correspond to the numerical results of Tables Ia and IIa: (i) The optimal robust allocations provide the overall best results, having high throughput despite arbitrary single-node failures with the lowest memory consumption (see Table Ia). (ii) Using slightly more data, our three-step approach also allows calculating allocations for large cluster sizes with the same throughput properties as the optimal solution. (iii) Our allocations provide a clearly higher worst-case throughput for larger $K$ (up to $+91\%$ for $K=8$) than solutions by [5], which have a similar memory consumption (see Table IIa). (iv) Recall, allocations of the chaining approach require significantly more memory (up to $+28\%$ for $K=14$) than our approach. Figure 3b visualizes the throughput in all failure cases for cluster size $K = 13$. The chaining approach and our robust approach provide high and stable throughput in all failure cases, whereas the throughput of Rabl and Jacobsen’s allocations may drop significantly when a specific node (e.g., 4 or 9) fails.
**Remark 3** The conducted end-to-end evaluation (Figure 3) verifies that the theoretically obtained results for our fragment allocations, i.e., memory efficiency and optimal worst-case workload limits, also hold in deployed systems.

### VI. Related Work

Özsu and Valduriez give an overview of allocation problems in the context of distributed database systems [6]: Allocation problems differ in (i) optimization goals, e.g., performance, costs, and reliability, and (ii) constraints based on the system assumptions. Because problem formulations are often proven to be NP-hard, much research aims at good heuristic solutions. As optimization goals and constraints differ, heuristics are often tied to specific allocation problems. Our optimization goal and constraints are similar to the work of Rabl and Jacobsen [5]. We maximize throughput by balancing the load and minimize the cluster’s overall memory consumption. Rabl and Jacobsen showed that partial replication not only reduces the memory consumption for read-intensive workloads, but also scales better for write-intensive workloads, because replicas have to modify only the fragments they store. In contrast to the robust extension in [5], our approach decomposes the problem, adds robustness, and enhances the solution, using linear programming for all steps. Archer et al. address a similar (coupled data and query assignment) problem [9]. They evenly load balance queries for web search containing multiple terms, which correspond to the fragments in our model. In contrast to our model, data of assigned terms (fragments) can be loaded on demand. The allocation problem tackled by Ghosh et al. [10], on the other hand, has no such coupling, which reduces the complexity of the problem significantly. They replicate fragments according to the access rate and balance the number of fragments per node. Further, they focus on a dynamic setting [11], in which fragments and queries change over time.
Allocation problems with uncertain workloads are heuristically addressed in [12].

### VII. Conclusions

This paper investigated the problem of evenly balancing a workload among nodes to maximize throughput while minimizing the cluster’s overall memory consumption. Taking single node failures into account, data fragments have to be assigned to nodes such that a given workload can be evenly balanced in any scenario. Besides an LP-based optimal solution, which is only applicable to small problems, we presented a scalable three-step heuristic. We compared our robust approaches with the state-of-the-art techniques [5] and [7] for the TPC-H and TPC-DS workloads. Using numerical and end-to-end evaluations, we showed that our three-step heuristic calculates close-to-optimal allocations and outperforms current techniques by achieving better combinations of worst-case throughput and required memory consumption, e.g., increasing the end-to-end throughput by up to 91% compared to approach [5], and using up to 38% less memory than approach [7].

REFERENCES
Visual Mining of Power Sets with Large Alphabets

Tamara Munzner, Qiang Kong, Raymond T. Ng, Jordan Lee, Janek Klawe, Dragana Radulovic, Carson K. Leung

ABSTRACT

We present the PowerSetViewer visualization system for the lattice-based mining of power sets. Searching for itemsets within the power set of a universe occurs in many large dataset knowledge discovery contexts. Using a spatial layout based on a power set provides a unified visual framework at three different levels: data mining on the filtered dataset, browsing the entire dataset, and comparing multiple datasets sharing the same alphabet. The features of our system allow users to find appropriate parameter settings for data mining algorithms through lightweight visual experimentation showing partial results. We use dynamic constrained frequent set mining as a concrete case study to showcase the utility of the system. The key challenge for spatial layouts based on power set structure is handling large alphabets, because the size of the power set grows exponentially with the size of the alphabet. We present scalable algorithms for enumerating and displaying datasets containing between 1.5 and 7 million itemsets, and alphabet sizes of over 40,000.

Keywords: frequent set, power set, visualization, interactive data mining

1. INTRODUCTION

Human visualization can play a major role in knowledge discovery from large datasets (KDD). In this paper, we present a visualization system for lattice-based mining of power sets. Searching the power set, the set of all itemsets using the alphabet of items in a given universe, is a fundamental task in many KDD contexts [14]. Prime examples include association rules and frequent set methods [1], sequential patterns [2], decision trees [4], and data clustering [17]. Our PowerSetViewer (PSV) system advances the state of the art by providing scalability in the size of the alphabet.
The power set grows huge as the alphabet size increases: a universe of only 24 items outstrips the number of pixels on the screen, and universes of over 32 or 64 items are difficult to even store in standard data formats. Using a spatial layout based on a power set presents algorithmic challenges, but provides a unified visual framework at three different levels: data mining on the filtered dataset, the entire dataset, and comparison between multiple datasets or data mining runs sharing the same alphabet. Users can find appropriate parameter settings for data mining algorithms quickly through lightweight visual experimentation showing how various parameter settings would create a filter for the mined data with respect to the entire dataset. PSV acts as a "windshield" to make the execution of the data mining algorithms transparent. When the mining algorithms are steerable, dynamic display of intermediate or partial results helps the user decide how to change the parameter settings of the computation in midstream. In our unified framework, we can support setting the filter parameters to let all itemsets pass through, so that the entire input dataset is shown to the user. Another benefit of using the power set for spatial layout is that users can even meaningfully compare images that represent two different datasets that share the same alphabet, for example by comparing the distribution of purchases between two chain stores in different geographic regions. Our visualization system consists of two main parts, as shown in Figure 1: the power set visualization module, the visualizer; and the data mining engine, the miner. In the current version of our system, the data mining engine implements dynamic, constrained frequent set mining as a concrete case study. The appeal of constrained frequent set mining is well known [20, 8, 18].
One key problem that has not been addressed in previous work is how to support users in choosing and changing constraint thresholds and parameters. This unsolved problem makes dynamic constrained frequent set mining a perfect case study to showcase the power of our visualization system. While dynamic constrained frequent set mining affects the "look" of the visualization system shown here, the contributions of this paper are not in the domain of dynamic frequent set mining, which has been presented previously [18]. Our first contribution is a visualization module using a spatial layout based on power sets that scales in both alphabet size and number of itemsets, handling alphabets of more than 40,000 items and datasets of over 7 million itemsets. The visualization system provides users with an exploratory view of the full complexity of the actual distribution of the itemsets in a particular dataset with respect to the space of possibilities. Our second contribution is the two-part system which connects this visualization module to a dynamic frequent set mining server, showing how this visualization approach helps users exploit the power of steerability. The back-end data mining engine can act as a filter for datasets far larger than the 7 million itemset limit of the front-end visualization module. The ability of the visualization module to handle large alphabets, commensurate with what the data mining engine can support, provides the power to show the distribution of the filtered itemsets within the same visual space of all possibilities. Figure 1: PSV has a client-server architecture, with a visualizer that can show either the filtered results from the miner or the raw data directly. We begin the paper with a set of example scenarios of use in the domain of frequent set mining and a discussion of the features offered by the visualizer in Section 2. Section 3 covers related work.
We present the key algorithms for supporting the huge size of the power set of a large universe in Section 4. Section 5 contains experimental results illustrating the scalability of our algorithms and a discussion of our design choices. We end with conclusions and future work in Section 6. 2. POWER SET VISUALIZATION PSV supports interactive data mining by displaying the sets of interest with respect to the power set; that is, the space of all possible sets. Users can choose these sets of interest by specifying constraints for which objects should be highlighted in the visualizer or filtered by the miner. 2.1 Example Scenarios We present four scenarios of using PSV features during a data mining task. These scenarios are built around a real course enrollment database, where an itemset is the set of courses taken by a student during a particular term. The 95,776 itemsets cover the six terms of the academic years 2001, 2002, and 2003. The alphabet size, 4616, is the total number of courses offered. Our example user is an undergraduate course coordinator interested in finding sets of courses frequently taken together, so that she can minimize conflicts when scheduling the next year’s courses. Scenario 1: She first considers medium-sized courses, and decides to start with the enrollment threshold of 100. Her first constrained frequent set query is \( frequency \geq 0.005 \) and \( max(courseSize) \leq 100 \). She watches the visualizer display as it dynamically updates to show the progress of the miner, and notices from the sparseness of the display that her initial parameter setting might not be appropriate. Figure 2 Top Left shows the constrained frequent sets computed so far when she pauses the computation after 10% of the itemsets have been processed. The sets are shown as a distribution of small boxes ordered by cardinality, from singleton sets at the top to 5-sets on the bottom. Hereafter we use the convention of \( k \)-sets to denote sets of size \( k \).
Each cardinality has a different background color, and within each cardinality the sets are enumerated in lexicographic order, as discussed in Section 4.1. The coordinator loosens the frequency constraint to .001, and tightens the enrollment parameter to 80 or fewer students before resuming the computation. Figure 2 Top Right shows the final result with the new constraints. After all ninety-six thousand itemsets have been processed by the miner, the largest constrained frequent itemset is a 9-set, whereas the largest set in the partial result in Figure 2 Top Left is a 5-set. Scenario 2: The coordinator now returns to the question of what course size would represent her intuitive idea of “medium”. Instead of filtering the itemsets with the miner, she loads in the entire database in pass-through mode so that she can quickly explore by highlighting itemsets that satisfy different attribute constraints directly in the visualizer. She tries several values for the maximum enrollment constraint, and in less than a minute settles on the value of 100 as shown in Figure 2 Middle Left. Scenario 3: She continues by zooming in to 2-sets to investigate details that cannot be resolved from the overviews that she has seen so far, which show aggregate information about multiple sets if they fall into the same spatial region in the layout. When she zooms far enough in, each on-screen box in the zoomed-in region represents only a single set. The relative ordering of itemsets is preserved in both the horizontal and vertical directions. She can still see the highly aggregated information about 1-sets on top and higher cardinality sets on the bottom, so she can easily keep track of the relative position of the area that she has zoomed. She can browse many itemsets in a few seconds by moving the cursor over individual boxes to check the course names reported in the lower left corner of the display.
Those highlighted sets give her ideas of which courses to avoid scheduling at the same time as CPSC 124. Scenario 4: Having found a good enrollment threshold of 100 that characterizes medium-sized courses, she is ready to investigate individual courses and whether the set of courses frequently taken together changes over time. Instead of looking at the combined data over all academic years, she selects only the 2001 data, and returns to using the miner to filter with the constraints of \( frequency \geq 0.001 \) and \( max(courseSize) \leq 100 \), and clicks on the box representing the 1-set CPSC 124. Figure 2 Bottom Left shows that this 1-set and all of its supersets are highlighted. In other words, she can see the upward closure property of the containment relation. Using lattice terminology, the highlighted elements form a lower semi-lattice with CPSC 124 as the bottom element, and they satisfy all the specified constraints.

Figure 2: Scenarios for data mining with PSV. Top: The user steers the miner by changing a constraint in midstream. **Top Left:** The user pauses after 10% of the itemsets are processed to loosen the frequency constraint and tighten the other parameters. **Top Right:** The view is considerably denser, and higher cardinality sets are shown, after all the itemsets are processed. Middle: The user loads in the entire raw dataset to find good parameter settings with lightweight experimentation, using the same unified framework as when the miner was filtering data. **Middle Left:** Courses with enrollment less than 100 are highlighted. **Middle Right:** The rubber-sheet navigation technique of stretching and squishing a box shows details. Bottom: Comparing different datasets that share the same alphabet. **Bottom Left:** CPSC 124 and its supersets are highlighted for the academic year 2001. **Bottom Right:** Highlighting the same configuration for 2003 allows the visual comparison between datasets.
The courses contained in these highlighted sets are the ones to avoid scheduling simultaneously with CPSC 124. She then starts up a second copy of PSV with the same configuration on the academic year 2003 data, as shown in Figure 2 Bottom Right. With the 2001 and 2003 displays side by side, she can quickly spot differences and mouseover those boxes to find the names of courses that became less popular to take with CPSC 124. 2.2 Features The features of PSV integrate the visual perception abilities of human users throughout the data mining process. 2.2.1 Client-Server Architecture PSV has a client-server architecture, as shown in Figure 1. The server is a steerable data mining engine, the miner, that is connected through sockets with a client visualization module, the visualizer, that handles graphical display. The visualizer client includes interface components for controlling both itself and the miner server. The client and server communicate with a simple text protocol: the client sends control messages to the server, including constraint settings, pause, play, and restart. The miner sends partial results to the visualizer as they are completed, allowing user monitoring of progress. The miner server is written in C, while the visualizer client is a Java program using hardware-accelerated OpenGL with the GL4Java bindings. 2.2.2 Visual Metaphor PSV uses the visual metaphor of accordion drawing [5], an information visualization technique first used for tree browsing and comparison [19]. This technique for exploring two-dimensional spatial layouts features rubber-sheet navigation [21] and guaranteed visibility [19]. Rubber-sheet navigation allows the user to select any rectangular area to stretch out, showing more detail there, and that action automatically squishes the rest of the scene. All stretching and squishing happens with smoothly animated transitions, so that the user can visually track the motions easily. 
Parts of the scene can become highly compressed, showing a very high-level aggregate view of those regions. However, no part of the scene will ever slide out of the field of view, following the metaphor that the borders of the malleable sheet are nailed down. Although the absolute location of an itemset changes, the relative ordering of the itemset with respect to its neighbors is always preserved in both the horizontal and vertical directions. Accordion drawing allows interactive exploration of datasets that contain many more itemsets than the fixed number of pixels available in a finite display. A second critical property of accordion drawing is the guaranteed visibility of visual landmarks in the scene, even if those features might be much smaller than a single pixel. Without this guarantee, a user browsing a large dataset cannot know if an area of the screen is correctly blank because it is truly empty, or if it is misleadingly blank because itemsets in that region happen to be smaller than the available screen resolution. 2.2.2.1 Layout PSV introduces a novel layout that maps a related family of datasets, those sharing the same alphabet of available items, into the same absolute space of all possibilities. That space is created by enumerating the entire power set of a finite alphabet as a very long one-dimensional list, where every possible set has an index in that list. That linear list is wrapped scanline-style to create a two-dimensional rectangular grid of fixed width, with a small number of columns and a very large number of rows. We draw a small box representing a set if it is passed to the visualizer by the miner, located at the position in the grid corresponding to its index in this wrapped enumeration list. Without guaranteed visibility, these boxes would be much smaller than pixels in the display for alphabets of any significant size because of the exponential nature of the power set.
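The scanline wrapping of enumeration indices into the fixed-width grid, and the per-cell counting that drives the saturation encoding, can be sketched as follows. This is a minimal illustration: the grid width of 128 columns matches one of the typical widths mentioned later, but the function names and the Counter-based aggregation are our own choices, not the system's implementation.

```python
from collections import Counter

def grid_position(index, width=128):
    # Scanline wrapping: the 1D enumeration index becomes a
    # (row, column) cell in a grid of fixed width.
    return index // width, index % width

def aggregate(indices, width=128):
    # Count how many sets land in each grid cell; the count would
    # drive the saturation of an aggregate box (pale = few sets,
    # dark and fully saturated = many sets).
    return Counter(grid_position(e, width) for e in indices)
```

For example, `aggregate([0, 1, 128, 128])` maps two sets to cell `(1, 0)`, which would therefore be drawn more saturated than the singleton cells `(0, 0)` and `(0, 1)`.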
This guarantee is one fundamental reason why PSV can handle large alphabets. In areas where there is not enough room to draw one box for each set, multiple sets are represented by a single aggregate box. The color of this box is a visual encoding of the number of sets that it represents using saturation: objects representing few sets are pale, and those representing many are dark and fully saturated. Color is also used in the background to distinguish between areas where sets of different cardinality are drawn: those background regions alternate between four unobtrusive, unsaturated colors. The minimum size of boxes is controllable, from a minimum of one pixel to a maximum of large blocks that are legible even on high-resolution displays. In this layout, seeing visual patterns in the same relative spatial region in the visualization of two different datasets means they have similarities in their distribution in this absolute power set space. Side by side visual comparison of two different datasets sharing the same alphabet is thus a fruitful endeavor, as described in Scenario 4 above. 2.2.3 Constraints PSV allows the user to specify the following types of constraints for interactive constrained frequent-set mining: (a) **aggregation constraints** such as \( \text{max}(\text{courseSize}) \leq 100 \), which specifies that all courses in the itemset must not exceed 100 in student enrollment. Other forms of aggregations, such as minimum, sum, and average, are also allowed. The attribute \( \text{courseSize} \) is an auxiliary attribute associated with each item. Examples of other attributes include the class average, the number of assignments, and so on; (b) **frequency constraint**, such as \( \text{frequency} \geq .0001 \); (c) **containment constraints**, which find all the sets that contain \( \text{any} \) of the specified items. In Scenario 4 above, the containment constraint finds all the sets containing the course CPSC 124.
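The three constraint types can be read as per-itemset predicates. A minimal sketch, with hypothetical item names, attribute table, and thresholds (the actual system evaluates these in the visualizer or the miner, not as shown here):

```python
def satisfies_aggregation(itemset, attr, limit):
    # (a) aggregation constraint of the form max(attr) <= limit:
    # the auxiliary attribute of every item must respect the bound.
    return max(attr[item] for item in itemset) <= limit

def satisfies_frequency(itemset, transactions, threshold):
    # (b) frequency constraint, e.g. frequency >= 0.001: the fraction
    # of transactions containing the itemset as a subset.
    support = sum(1 for t in transactions if set(itemset) <= set(t))
    return support / len(transactions) >= threshold

def satisfies_containment(itemset, wanted):
    # (c) containment constraint: the set contains any wanted item.
    return bool(set(itemset) & set(wanted))
```

Note that (b) requires a pass over the transactions while (a) and (c) inspect only the itemset itself, which mirrors why frequency constraints are pushed to the miner below.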
This allows the user to examine the parts of the lattice of interest. Constraints are processed at different locations within PSV: some can be handled by both the visualizer and the miner, while others are only processed by the miner or only processed by the visualizer. Frequency constraints are computationally intensive, and are thus “pushed inside” to the miner, in order to provide as much pruning as possible. The miner can handle a combination of frequency and aggregation constraints. Sets that satisfy these miner constraints are sent to the visualizer for display. Scenario 1 above showcases the dynamic specification and processing of miner constraints. Specifying constraints for the miner is also a way to filter datasets larger than the current 7-million itemset capacity of the visualizer, which maintains all loaded itemsets in main memory to support fluid realtime exploration. The current proof-of-concept miner handles only a single aggregation constraint, but extending it to handle multiple aggregation constraints would be straightforward. Furthermore, it can be extended to support other more general types of constraints, such as wild card matching, considered in [18]. The visualizer supports aggregation and containment constraints by visually highlighting the matching sets from among those it has loaded. It can handle multiple simultaneous constraints, coloring each with a different color. This capacity is also used to visually show history, as described below. The visualizer supports immediate exploration of multiple simple constraints but has the limited capacity of 7 million itemsets, whereas the miner can handle very large datasets and more sophisticated constraints but requires a longer period of time for computation. Scenarios 3 and 4 above highlight the true power of PSV, in that the user first uses fast and lightweight exploration features on the visualizer side to find an appropriate value for the aggregate constraint max(courseSize).
Then, when the user is satisfied, the constraint can instead be pushed to the more heavyweight miner side for efficiency. The computationally more powerful miner engine will then prune more sets, imposing less burden on the visualizer. 2.2.4 Interaction Interactions that can be accomplished quickly and easily allow more fluid exploration than those that require significant effort and time to carry out. The PSV design philosophy is that simple operations should only require minimal interaction overhead. The rubber-sheet navigation, where the user sweeps out a box anywhere in the display, and then drags the corner of the box to stretch or shrink it, is just one example. Mouseover highlighting occurs whenever the cursor moves, so that the box currently under the cursor is highlighted and the names of the items in that highlighted itemset are shown in a status line below the display. Mouseover highlighting is a very fast operation that can be carried out many times each second because it does not require a redraw of the entire scene. Highlighting the supersets of an itemset can be done through the shortcut of a single click on the itemset’s box. In contrast, the more general aggregation constraints that require setting several parameters are handled through a more heavyweight control panel interaction. The layout and rubber-sheet navigation provide a spatial substrate on which users can explore by coloring sets according to constraints, as described above. We do not support changes in the relative spatial position of itemsets, because it would then be impossible to usefully compare visual patterns at different times during the interaction. The underlying mechanism for coloring is to assign sets to a group, which has an assignable color. Users can create an arbitrary number of colored groups, so they can be a mechanism for tracking the history of both visualizer and miner constraints, by saving each interesting constraint choice as a separate group.
The priority of groups is controllable by the user; when a particular set belongs to multiple enabled groups, the highest priority group color is shown. 2.2.5 Monitoring The visualizer shows several important status variables: - **total**: the total number of itemsets in the raw dataset; - **processed**: the number of itemsets processed so far by the miner; - **shown**: the number of itemsets passed on to the visualizer to display; - **rows**: the number of visualizer rows needed so far; - **maxrow**: the biggest visualizer row needed so far; Comparing these numbers helps users make choices: for instance, total vs. processed is the progress of the miner, and processed vs. shown shows the amount of filtering done by the miner. Comparing rows with maxrow shows the average distribution density of itemsets; and comparing shown with processed gives the user feedback on whether the miner constraints should be changed to make the filter tighter or looser. In addition to the mining issues discussed in Scenario 1 above, tightening filter constraints is especially important if the shown value begins to approach the finite capacity of the visualizer. Section 5 contains a discussion of that limit, which currently ranges from 1.5 to 7 million itemsets. 3. RELATED WORK Developing effective visualization tools for KDD is the subject of many studies, which can be sub-classified into two general categories. The first category focuses on data visualization systems. Examples include VisDB [13], Spotfire [3], Independence Diagrams [7], and Polaris [23]. These systems provide features to arrange and display data in various forms. For example, VisDB provides pixel-oriented techniques, parallel coordinates and stick figures to the user for exploring large datasets; Polaris provides a visual interface to help the user formulate complex queries against a multidimensional data cube. 
However, these systems are not connected to any data mining engine, nor are they designed to display data mining results. The PSV system, while allowing the raw data to be visualized and explored, provides a unified visual framework to the user to examine the data mining (partial) results as well. This framework allows the user to steer the data mining process midstream, and to compare between multiple data mining runs using the same alphabet. The second category of related work focuses on visualizing mining results. Examples include decision trees [4, 11], association rules [10, 12], and clustering [17]. The visual framework proposed by Ankerst et al. [4] focuses on involving the user in the building of decision trees. The visualization method developed by Koren and Harel [17] is designed for cluster analysis and validation. The visual metaphors of these systems are very different from the PSV system, which uses a spatial layout based on the power set of an alphabet. The rule visualization system developed by Han and Cercone [10] focuses on the discretization of numeric attributes. The system uses parallel coordinates to show the mined association rules. In [12], Hofmann et al. use a variant of mosaic plots, called double decker plots, to visualize association rules. Their focus is to help users understand association rules. PSV instead operates at the level of frequent sets and constraints. Furthermore, unlike the two previous frameworks, PSV supports the steering of the mining process midstream. Again, our use of a spatial layout based on a power set is unique. Accordion drawing was originally proposed for browsing phylogenetic trees [19, 6], and was then adapted for the task of visually comparing multiple aligned gene sequences [22]. The power set-based spatial layout used by PSV was first presented in a recent paper on a general framework for accordion drawing [5].
That paper deals mainly with the graphics challenges of navigation and rendering at interactive frame rates, whereas here we focus on issues of interest in data mining. In particular, that previous work did not support large alphabet sizes, whereas scalable algorithms to do so are the first contribution of this paper. The second contribution of this paper is in combining the visualizer with a data mining engine to provide a unified framework that handles the three levels of data mining on the filtered dataset, the entire dataset, and comparison between multiple datasets sharing the same alphabet. 4. ALGORITHMS The mapping from a set to a box that is drawn in a display window has three main stages: - convert from an \( m \)-set \( \{s_1, \ldots, s_m\} \) to its index \( e \) in the enumeration of the power set - convert from the enumeration index \( e \) to a \((row, column)\) position in the grid of boxes - convert from the \((row, column)\) grid position to a pixel location \((x, y)\) after rubber-sheet navigation has stretched and squished the grid Figure 3 gives an overview of the mapping process. The first two stages happen once when the set is loaded in PSV, whereas the last mapping must be recalculated for every frame. We present an efficient \( O(m) \) algorithm for the first stage of computing an enumeration index \( e \) given an arbitrary set in Section 4.1. The second stage is straightforward: \( row \) is \( e \) divided by the width of the grid, and \( column \) is \( e \) modulo the width. The third stage uses the hierarchical data structures of the accordion drawing framework, and Section 4.2 describes the challenges of extending that data structure to handle large alphabet sizes. The details of the graphics algorithms that use the accordion drawing hierarchy for navigation and rendering are discussed in previous work [5]. 4.1 Enumeration The spatial layout described in Section 2.2.2.1 requires an enumeration of the power set.
Although many possible ways to enumerate power sets exist, such as Gray codes [15], most of them are not suitable for creating meaningful visual patterns that are easy for data mining users to relate to. In the domain of data mining, lattice structures are often used to traverse the power set in order of set cardinality. We thus base our enumeration on a primary ordering by cardinality: all 1-sets appear before the 2-sets, which appear before the 3-sets, and so on. Within a given cardinality, we choose a lexicographic ordering for alphabet items, again to match the power set traversal order of many lattice-based mining algorithms. For example, an alphabet of \( \{a, b, \ldots, z\} \) yields the enumeration \( \{a\}, \{b\}, \ldots, \{z\}, \{ab\}, \{ac\}, \ldots, \{yz\}, \{abc\}, \ldots \). We assume the underlying alphabet has a canonical lexicographic ordering; for example, \( a = 1, b = 2, \ldots, z = 26 \). All computations involving sets assume that their internal item ordering is also lexicographically sorted. The challenge here is to devise an efficient way to convert between an arbitrary set and its index in this enumeration of the power set. We start with an example of computing the enumeration index \( e = 1242 \) of the 3-set \( \{d, h, k\} \) given an alphabet of size 26. The computation is done in two steps. Given a particular \( m \)-set, the first step is to compute the total number of \( k \)-sets, for all \( k < m \). These are all the sets with a strictly smaller cardinality. For the \( \{d, h, k\} \) example, the first step is to compute the total number of 1-sets and 2-sets, which is given by \( \binom{26}{1} + \binom{26}{2} = 26 + 325 = 351 \). The general formula, where \( A \) is the size of the alphabet, is \[ \sum_{i=1}^{m-1} \binom{A}{i}. \] The second step is to compute the number of sets between the first \( m \)-set in the enumeration and the particular \( m \)-set of interest.
For the \( \{d, h, k\} \) example, the second step computes three terms: - the number of 3-sets beginning with the 1-prefixes \( \{a\}, \{b\}, \) or \( \{c\} \): \( \binom{25}{2} + \binom{24}{2} + \binom{23}{2} = 300 + 276 + 253 = 829 \). Picking \( a \) as a 1-prefix leaves 2 other choices that yield a 3-set containing \( a \): there are 25 other items left in the alphabet from which to choose 2. Similarly, when \( b \) is then picked as the 1-prefix, there are only 24 choices left; since the \( m \)-set is internally ordered lexicographically, neither \( a \) nor \( b \) are available any more as choices. - the number of 3-sets beginning with the 2-prefixes \( \{d, e\}, \{d, f\}, \) or \( \{d, g\} \), which is given by \( \binom{21}{1} + \binom{20}{1} + \binom{19}{1} = 21 + 20 + 19 = 60 \); and - the number of 3-sets with the 3-prefixes \( \{d, h, i\} \) and \( \{d, h, j\} \), which is \( \binom{17}{0} + \binom{16}{0} = 1 + 1 = 2 \). This example suggests a formula of \[ \sum_{i=1}^{m} \sum_{j=p_{i-1}+1}^{p_i-1} \binom{A-j}{m-i} \] where \( p_i \) is the lexicographic index of the \( i \)-th element of the \( m \)-set and \( p_0 = 0 \). In the worst case, the number of terms required to compute this sum is linear in the size of the alphabet. However, we can collapse the inner sum to be just two terms by noticing that \[ \binom{n}{k} = \binom{n+1}{k+1} - \binom{n}{k+1}. \] We derive this lemma using the identity \( \binom{n}{k} = \binom{n-1}{k} + \binom{n-1}{k-1} \).
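The collapse of the inner sum is a telescoping argument; spelling it out (our verification step, writing \( k = m-i \) and applying the lemma to each term of the inner sum):

\[
\sum_{j=p_{i-1}+1}^{p_i-1} \binom{A-j}{k}
= \sum_{j=p_{i-1}+1}^{p_i-1} \left[ \binom{A-j+1}{k+1} - \binom{A-j}{k+1} \right]
= \binom{A-p_{i-1}}{k+1} - \binom{A-p_i+1}{k+1},
\]

since the negative term for each \( j \) cancels the positive term for \( j+1 \), leaving only the first positive term (\( j = p_{i-1}+1 \)) and the last negative term (\( j = p_i-1 \)).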
The general formula is thus given by \[ \sum_{i=1}^{m} \left[ \binom{A-p_{i-1}}{m-i+1} - \binom{A-p_i+1}{m-i+1} \right] \quad (1) \] Combining these two steps, we can compute the enumeration index as \[ \sum_{i=1}^{m-1} \binom{A}{i} + \sum_{i=1}^{m} \left[ \binom{A - p_{i-1}}{m - i + 1} - \binom{A - p_i + 1}{m - i + 1} \right]. \] The complexity of computing the index of a set can be reduced to \(O(m)\), where \(m\) is the cardinality of the set, by using a lookup table instead of explicitly calculating \(\binom{n}{k}\). We compute such a table of size \(n \times k\) using dynamic programming in a preprocessing step. As we discuss in Section 4.2.2, the maximum set size \(k\) needed for these computations is often much less than the alphabet size \(n\), but we do not want to hardwire any specific limit on maximum set size. Our time-space tradeoff is to use the lookup table for the common case of a small \(k\) (25 in our current implementation), and to explicitly compute the binomial coefficient for the rare case of a large \(k\).

### 4.2 SplitLine Hierarchy

The accordion drawing framework that handles navigation and rendering in the visualizer uses the core data structure of SplitLines, which represent a hierarchical subdivision of space. There are two hierarchies, one for the horizontal direction and one for the vertical. A SplitLine can be interpreted in two ways, as a line or as a region, as shown in Figure 4. First, the set of SplitLines can be considered as a linear list of lines, where each line falls between two spatial neighbors, and the line indices can be ordered from the minimum to the maximum absolute spatial position in window space. Second, it forms a hierarchical binary tree structure, where each SplitLine splits a higher-level region into two pieces. The highest-level region is the entire window, which is split by the root SplitLine.
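As a concrete check of the combined enumeration formula, here is a sketch of the full \(O(m)\) index computation; for brevity it uses Python's `math.comb` in place of the precomputed lookup table described in the previous section:

```python
from math import comb

def enumeration_index(p, A):
    """0-based index of the set with sorted 1-based item indices p in the
    cardinality-then-lexicographic enumeration of the power set of an
    A-item alphabet (empty set excluded)."""
    m = len(p)
    smaller = sum(comb(A, i) for i in range(1, m))  # all k-sets with k < m
    before, prev = 0, 0                             # prev = p_{i-1}, with p_0 = 0
    for i, pi in enumerate(p, start=1):
        before += comb(A - prev, m - i + 1) - comb(A - pi + 1, m - i + 1)
        prev = pi
    return smaller + before

print(enumeration_index([4, 8, 11], 26))  # {d, h, k}: 351 + 891 = 1242
print(enumeration_index([1], 26))         # {a} is the first set: 0
```

Each loop iteration evaluates the two collapsed binomial terms for one element of the set, so the cost is linear in the set's cardinality.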
The SplitValue associated with a line gives the relative position of the region split as a number between 0 and 1, and dynamically changes with user navigation. The AbsoluteValue, the absolute position of each line in screen space, can be calculated in \(O(\log s)\) time, where \(s\) is the number of SplitLines, by recursively finding the absolute location of the boundaries of each line’s parent region up to the base case of the window boundaries.

![Figure 4: A set of SplitLines provides both a linear ordering and a hierarchical subdivision of space. Linearly, SplitLine B falls spatially between A and C. Hierarchically, it splits the region to the left of its parent SplitLine D in two, and its range is from the minimum SplitLine to its parent SplitLine D. The diagram here shows only the horizontal SplitLines; the vertical situation is analogous.](image)

Previous applications that used accordion drawing statically instantiated the SplitLine hierarchy as a preprocessing step. A critical aspect in supporting large alphabets is to instead dynamically instantiate the SplitLine hierarchy, to handle the case where the distribution of itemsets to show in the viewer is very sparse with respect to the full power set. Figure 3 illustrates this important concept. We use the well-known red-black tree data structure [9] for maintaining a nearly-balanced binary tree with an insertion and deletion cost of \(O(\log n)\), where each of the \(n\) tree nodes corresponds to a SplitLine. As discussed previously [5], we extend this data structure so that SplitValues are correctly maintained when rebalancing the tree through local rotations. Dynamic layout is important for horizontal SplitLines, since the number of rows grows very large, but the vertical SplitLine hierarchy is instantiated statically because it has a small fixed width: typically 64 or 128 columns. Figure 3 illustrates the algorithm for adding SplitLines only as needed.
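The recursive AbsoluteValue computation can be sketched as follows; this is a simplified fragment (class and method names are ours), ignoring the red-black tree machinery and treating the window as the unit interval:

```python
class SplitLine:
    def __init__(self, split_value, parent=None, is_left_child=False):
        self.split_value = split_value      # relative split position in (0, 1)
        self.parent = parent
        self.is_left_child = is_left_child  # does this line split the left half of its parent region?

    def region(self):
        """Absolute boundaries of the region this line splits, found by
        walking up to the window boundaries (the base case)."""
        if self.parent is None:
            return (0.0, 1.0)  # the root splits the whole window
        lo, hi = self.parent.region()
        split = lo + self.parent.split_value * (hi - lo)
        return (lo, split) if self.is_left_child else (split, hi)

    def absolute_value(self):
        lo, hi = self.region()
        return lo + self.split_value * (hi - lo)

root = SplitLine(0.5)                                    # splits [0, 1] at 0.5
child = SplitLine(0.5, parent=root, is_left_child=True)  # splits [0, 0.5] at its midpoint
print(root.absolute_value(), child.absolute_value())     # 0.5 0.25
```

Each `absolute_value` call walks one root-to-line path, which gives the \(O(\log s)\) cost when the tree is kept balanced.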
When a node is added to the scene, its enumeration index is calculated, then its row and column number. We check whether we need to instantiate the SplitLines flanking the itemsets: we may need to create both, just one, or no lines. In the example, the alphabet size is 8 and the fixed width of the grid is also 8. The first itemset \(\{a\}\) has enumeration index 0, calculated with Equation 2, and is mapped to grid position \((0,0)\). Since the minimum SplitLine already exists, only SplitLine 1 must be instantiated. The next itemset \(\{a,b,c,d,e,f,g,h\}\) has index 254 and is mapped to the bottom row: \((31,6)\). Similarly, the maximum SplitLine is already allocated, so only line 31 is created. There is one empty row between the bottom and the top box on the screen. The third itemset, \(\{b\}\), is right next to the first one, and the horizontal line beneath it has already been created, so the red-black tree storing the SplitLine hierarchy does not change. The fourth itemset, \(\{a,b,c,e\}\), has index 93 and is mapped to \((11,5)\). Two new SplitLines need to be created, and the red-black tree automatically rebalances so that line 11 is at its root instead of line 1. Finally, the fifth itemset \(\{a,b,d,h\}\) has index 100 and maps to \((12,4)\). Because line 12 has already been created, only one more SplitLine, namely 13, must be instantiated. The 255 sets in the power set would require 31 horizontal SplitLines in a statically allocated grid of width 8; dynamic instantiation exploits the sparsity of the itemset distribution, creating only 5 lines.

#### 4.2.1 Large Alphabets

When the alphabet size \(A\) is large, the power set size \(P\) is a huge number: \(2^A\). Dynamic allocation of the SplitLine hierarchy, as discussed above, is necessary but not sufficient. The indices in the power set enumeration do not fit into an integer or a long when the alphabet size is greater than 31 or 63, whereas we support alphabets over 40,000.
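The index-to-grid mapping used throughout the walkthrough above is a single division by the fixed grid width; a sketch:

```python
def grid_position(index, width=8):
    """Map a 0-based power set enumeration index to a (row, column)
    grid position for a fixed-width layout."""
    return divmod(index, width)

# the positions from the walkthrough (alphabet size 8, grid width 8)
print(grid_position(0))    # (0, 0)
print(grid_position(254))  # (31, 6)
print(grid_position(93))   # (11, 5)
print(grid_position(100))  # (12, 4)
```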
The naive approach would be to simply switch data structures from integer to bignum everywhere that indices are used in the SplitLine hierarchy. However, computations using bignums are far more expensive than those using integers or longs, and storing them imposes a heavy memory footprint, so we would like to minimize their use. On the other hand, the visualizer must store all \(N\) sets actually shown in main memory, so our algorithms are optimized for the case where \(N \ll P\). In the current implementation, \(N\) is limited to the range of 1.5 to 7 million sets, a number far smaller than the two billion limit of integer data storage. Operations that use the number of shown sets \(N\) can be done much more efficiently, as opposed to those that use the alphabet size \(A\) or the power set size \(P\).

#### 4.2.2 Maximum Set Size

Often the dataset semantics dictate that the maximum set size is much smaller than the alphabet size. For example, it is essentially impossible to buy every item in a grocery store in one shopping trip or to take the thousands of courses offered at a university during the same term. Figure 2 Middle Left shows that the university enrollment dataset with alphabet size 4616 has a maximum set size of 13, and the market basket data in Figure 6 Bottom has a maximum set size of 115 out of the 1700 items in the alphabet. In contrast, although the particular software engineering dataset shown in Figure 6 Top has a maximum set size of 48 files checked in together during a bug fix out of 42,028 files in the alphabet, the domain semantics could allow a maximum set commensurate with the entire alphabet; for instance, if the copyright notice on top of each file needed changing, every file in the repository would be touched. An important property of our algorithm is that there is no hardwired prior limit on the maximum set size; we can accommodate a maximum set size up to the cardinality of the alphabet itself.
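The need for arbitrary-precision indices noted in Section 4.2.1 is easy to quantify: with a large alphabet, even the cardinality-prefix term of the enumeration index overflows a 64-bit long at modest set sizes. A quick check, with the alphabet size chosen to be on the order of the Mozilla dataset:

```python
from math import comb

A = 40_000
# number of sets of cardinality at most 9: already beyond a signed 64-bit long
prefix = sum(comb(A, i) for i in range(1, 10))
print(prefix > 2**63 - 1)  # True
```

Python integers are arbitrary-precision, so this sketch runs as-is; in a Java implementation such as PSV's, these values need a bignum type such as BigInteger.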
## 5. RESULTS AND DISCUSSION

We now discuss the performance of the PowerSetViewer system with several datasets, documenting that PSV can scale to datasets of up to 7 million itemsets and alphabet sizes of over 40,000, while maintaining interactive rendering speeds of under 60 milliseconds per frame. The real-world course dataset shown in Figure 2 contains 95,776 itemsets that represent sets of courses taken by a student during a particular term, with an alphabet size of 4616 courses offered. A second real-world Mozilla dataset, shown in Figure 6 Top, contains 33,407 itemsets that are sets of source code files checked in for a particular bug fix, with an alphabet size of 42,028 files in the repository. A third real-world market-basket dataset, shown in Figure 6 Bottom, has 515,575 itemsets that represent simultaneous store purchases by an individual, with an alphabet size of 1700 items for sale at a large electronics retailer [16]. Figure 5 shows the PSV performance results for memory usage and render speed for these three real-world datasets. The five datasets that share the same alphabet size of nearly 5,000 items have the same initial memory requirements. The sixth Mozilla dataset has the much larger alphabet size of over 40,000 items, and requires more memory to handle the same number of itemsets. The graphs also show performance for two different families of synthetic datasets, sparse and dense. The dense synthetic datasets are the extreme case of the densest possible distribution: they are random samples from the full power set of an alphabet of 10,000 items. PSV can handle 7 million itemsets from this dataset before running out of memory, giving an upper bound on supportable dataset size. The limits of PSV depend on the distribution of the dataset within the power set. Sparse datasets require the instantiation of more SplitLines than dense ones, resulting in less total capacity in the visualizer.
The sparse synthetic datasets have a distribution density roughly similar to the market-basket dataset, and use its alphabet. They represent the typical use case. PSV can handle over 1.5 million itemsets from this dataset family before its memory footprint outstrips the maximum Java heap size of 1.7GB. We reiterate that when using the miner as a filter, PSV as a client-server system can handle much larger datasets than the limits of the visualizer. Figure 5 First shows that the visualizer capacity limits are linear with respect to transaction size and depend on the sparsity of the distribution of the dataset with respect to the power set. Figure 5 Third shows that the rendering time is near-constant after a threshold dataset size has been reached, and this constant time is very small: 60 milliseconds per scene, allowing over 15 frames per second even in the worst case. The render time also depends linearly on the horizontal width of the grid. The full details of the data structures and algorithms that affect render time and memory usage are given elsewhere [6, 5]. We show these graphs here to document that our extension of these algorithms to support large alphabet sizes was indeed successful. The main challenge with PSV was to accommodate large alphabet and maximum set sizes, a difficult goal given the exponential nature of the power set used for spatial layout. We have done so, showing examples of PSV in use on alphabets ranging from 4,000 to over 40,000. Larger alphabets require more bits in the bignums used in the enumeration, affecting both the speed and memory usage of PSV. All performance results are based on the following configuration: a 3.0GHz Pentium 4 with 2GB of main memory running SuSE Linux with a 2.6.5 kernel, Java 1.4.1_02-b06 (HotSpot), an nVidia Quadro FX 3000 graphics card, and an 800x600 pixel window.

## 6. CONCLUSION AND FUTURE WORK

In this paper, we have reported the development of the PSV visualization system for lattice-based mining of power sets.
It provides a unified visual framework at three different levels: data mining on the filtered dataset, the entire dataset, and comparison between multiple datasets sharing the same alphabet. PSV is also connected to a dynamic frequent set mining server, showcasing how this visualization approach helps users exploit the power of steerability. The key technical challenge for PSV is the size of the alphabet. We develop a fast scheme for computing the enumeration position of a given \( m \)-set in \( O(m) \) time, devise a dynamic data structure for managing SplitLines, and handle bignums carefully to avoid inefficiencies. We conduct empirical evaluation of the PSV system with both real and synthetic datasets. The latter datasets are designed to expose the limits of the current implementation of the PSV system. The empirical evaluation shows that the current version is capable of handling an alphabet size over 40,000 items and a transaction dataset exceeding 7 million transactions. Maintaining high frame rates is critical to the success of interactive visual mining, and our framework succeeds in keeping the time to render the entire visible scene below one tenth of a second.

![Figure 6: Two real-world datasets. Top: The Mozilla dataset has 33,407 itemsets and an alphabet of 42,028. Bottom: The market-basket dataset has over a half-million itemsets and an alphabet of 1700 items.](image)

There are several directions of future work that we would like to pursue. First, we would like to characterize the effectiveness of different enumeration orderings in helping users find visual patterns that convey important information about the dataset. Secondly, we would like to juxtapose different datasets sharing the same alphabet in a single window.
Finally, we would like to adapt the current version of PSV to support comparison of different partial data mining outcomes for other data mining tasks, including sequential pattern mining [2], decision tree construction [4, 11], and clustering [17].

## 7. REFERENCES
The COIN-OR Optimization Suite: Open Source Tools for Optimization Part 4: Modeling with COIN Ted Ralphs INFORMS Computing Society Biennial Meeting Richmond, VA, 10 January 2015 Outline 1. Introduction 2. Solver Studio 3. Traditional Modeling Environments 4. Python-Based Modeling 5. Comparative Case Studies Generally speaking, we follow a four-step process in modeling. - Develop an abstract model. - Populate the model with data. - Solve the model. - Analyze the results. These four steps generally involve different pieces of software working in concert. For mathematical programs, the modeling is often done with an algebraic modeling system. Data can be obtained from a wide range of sources, including spreadsheets. Solution of the model is usually relegated to specialized software, depending on the type of model. Most existing modeling software can be used with COIN solvers. - **Commercial Systems** - GAMS - MPL - AMPL - AIMMS - **Python-based Open Source Modeling Languages and Interfaces** - Pyomo - PuLP/Dippy - CyLP (provides API-level interface) - yaposib Other Front Ends (mostly open source) - FLOPC++ (algebraic modeling in C++) - CMPL - MathProg.jl (modeling language built in Julia) - GMPL (open-source AMPL clone) - ZMPL (stand-alone parser) - SolverStudio (spreadsheet plug-in: www.OpenSolver.org) - Open Office spreadsheet - R (RSymphony Plug-in) - Matlab (OPTI) - Mathematica COIN-OR is an open source project dedicated to the development of open source software for solving operations research problems. COIN-OR distributes a free and open source suite of software that can handle all the classes of problems we’ll discuss.
- Clp (LP) - Cbc (MILP) - Ipopt (NLP) - SYMPHONY (MILP, BMILP) - DIP (MILP) - Bonmin (Convex MINLP) - Couenne (Non-convex MINLP) - Optimization Services (Interface) COIN also develops standards and interfaces that allow software components to interoperate. Check out the Web site for the project at http://www.coin-or.org Although not required, it’s useful to know something about how modeling languages interface with solvers. In many cases, modeling languages interface with solvers by writing out an intermediate file that the solver then reads in. It is also possible to generate these intermediate files directly from a custom-developed code. **Common file formats** - **MPS format**: The original standard developed by IBM in the days of Fortran, not easily human-readable and only supports (integer) linear modeling. - **LP format**: Developed by CPLEX as a human-readable alternative to MPS. - **.nl format**: AMPL’s intermediate format that also supports non-linear modeling. - **OSIL**: an open, XML-based format used by the Optimization Services framework of COIN-OR. - **Python C Extension**: Several projects interface through a Python extension that can be easily SolverStudio (Andrew Mason) - Spreadsheet optimization has had a (deservedly) bad reputation for many years. - SolverStudio will change your mind about that! - SolverStudio provides a full-blown modeling environment inside a spreadsheet. - Edit and run the model. - Populate the model from the spreadsheet. - In many of the examples in the remainder of the tutorial, I will show the models in SolverStudio. In Class Exercise: Install Solver Studio! T.K. Ralphs (Lehigh University) COIN-OR January 10, 2015 Outline 1. Introduction 2. Solver Studio 3. Traditional Modeling Environments 4. Python-Based Modeling 5. Comparative Case Studies Traditional Modeling Environments - AMPL is one of the most commonly used modeling languages, but many other languages, including GAMS, are similar in concept. 
- AMPL has many of the features of a programming language, including loops and conditionals. - Most available solvers will work with AMPL. - GMPL and ZIMPL are open source languages that implement subsets of AMPL. - The Python-based languages to be introduced later have similar functionality, but a more powerful programming environment. - AMPL will work with all of the solvers we’ve discussed so far. - You can also submit AMPL models to the NEOS server. - Student versions can be downloaded from www.ampl.com. A bond portfolio manager has $100K to allocate to two different bonds. <table> <thead> <tr> <th>Bond</th> <th>Yield</th> <th>Maturity</th> <th>Rating</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>4</td> <td>3</td> <td>A (2)</td> </tr> <tr> <td>B</td> <td>3</td> <td>4</td> <td>Aaa (1)</td> </tr> </tbody> </table> The goal is to maximize total return subject to the following limits. - The average rating must be at most 1.5 (lower is better). - The average maturity must be at most 3.6 years. Any cash not invested will be kept in a non-interest bearing account and is assumed to have an implicit rating of 0 (no risk). In many ways, AMPL is like any other programming language. **Example**: Bond Portfolio Model ```ampl ampl: option solver clp; ampl: var X1; ampl: var X2; ampl: maximize yield: 4*X1 + 3*X2; ampl: subject to cash: X1 + X2 <= 100; ampl: subject to rating: 2*X1 + X2 <= 150; ampl: subject to maturity: 3*X1 + 4*X2 <= 360; ampl: subject to X1_limit: X1 >= 0; ampl: subject to X2_limit: X2 >= 0; ampl: solve; ... ampl: display X1; X1 = 50 ampl: display X2; X2 = 50 ``` You can type the commands into a file and then load them. This makes it easy to modify your model later. Example: ampl: option solver clp; ampl: model bonds_simple.mod; ampl: solve; ... ampl: display X1; X1 = 50 ampl: display X2; X2 = 50 Suppose we don’t know ahead of time what bonds we want to include or what the input data describing each bond will be. 
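As an aside, the optimum reported above can be double-checked without a solver: at X1 = X2 = 50 the cash and rating constraints are tight, and the objective gradient is a nonnegative combination of their normals, which certifies optimality for this LP. A pure-Python sketch of that check:

```python
# Bond LP: max 4*X1 + 3*X2
# s.t. X1 + X2 <= 100 (cash), 2*X1 + X2 <= 150 (rating), 3*X1 + 4*X2 <= 360 (maturity)
X1, X2 = 50, 50

# feasibility, with the cash and rating constraints binding at this vertex
assert X1 + X2 == 100 and 2*X1 + X2 == 150 and 3*X1 + 4*X2 <= 360

# dual certificate: (4, 3) = y1*(1, 1) + y2*(2, 1) with y1, y2 >= 0
y1, y2 = 2, 1
assert (y1 + 2*y2, y1 + y2) == (4, 3) and y1 >= 0 and y2 >= 0

print('objective =', 4*X1 + 3*X2)  # objective = 350
```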
For this purpose, we can develop an abstract algebraic model without specifying values for the input data. Components of an abstract algebraic model are - **Data** - **Sets**: Lists of stocks and other investment options - **Parameters**: Numerical inputs such as budget restrictions, historical returns, etc. - **Model** - **Variables**: Values in the model that need to be decided upon. - **Objective Function**: A function of the variable values to be maximized or minimized. - **Constraints**: Functions of the variable values that must lie within given bounds. Example: General Bond Portfolio Model (bonds.mod) ```plaintext set bonds; # bonds available param yield {bonds}; # yields param rating {bonds}; # ratings param maturity {bonds}; # maturities param max_rating; # Maximum average rating allowed param max_maturity; # Maximum maturity allowed param max_cash; # Maximum available to invest var buy {bonds} >= 0; # amount to invest in bond i maximize total_yield : sum {i in bonds} yield[i] * buy[i]; subject to cash_limit : sum {i in bonds} buy[i] <= max_cash; subject to rating_limit : sum {i in bonds} rating[i]*buy[i] <= max_cash*max_rating; subject to maturity_limit : sum {i in bonds} maturity[i]*buy[i] <= max_cash*max_maturity; ``` Getting the Data (bonds.dat) - The data to populate the model can come from a number of sources. - AMPL has its own format for specifying the data in the model. \begin{verbatim} set bonds := A B; param : yield rating maturity := A 4 2 3 B 3 1 4; param max_cash := 100; param max_rating := 1.5; param max_maturity := 3.6; \end{verbatim} Solving the Model (bonds.run) ampl: model bonds.mod; ampl: data bonds.dat; ampl: solve; ... ampl: display buy; buy [*] := A 50 B 50 ; Suppose some of the input data changes. To resolve from scratch, simply modify the data file and reload. ```ampl ampl: reset data; ampl: data bonds_alt.dat; ampl: solve; ...
ampl: display buy;
buy [*] :=
A 30
B 70
;
```

Modifying Individual Data Elements

Instead of resetting all the data, you can modify one element.

```ampl
ampl: reset data max_cash;
ampl: data;
ampl data: param max_cash := 150;
ampl data: solve;
...
ampl: display buy;
buy [*] :=
A 45
B 105
;
```

Now suppose we want to add another type of bond.

```plaintext
set bonds := A B C;
param : yield rating maturity :=
A 4 2 3
B 3 1 4
C 6 3 2;
param max_cash := 100;
param max_rating := 1.3;
param max_maturity := 3.8;
```

ampl: reset data; ampl: data bonds_extended.dat; ampl: solve; ... ampl: display buy; buy [*] := A 0 B 85 C 15 ; Another obvious source of data is a spreadsheet, such as Excel. AMPL has commands for accessing data from a spreadsheet directly from the language. An alternative is to use SolverStudio. SolverStudio allows the model to be composed within Excel and imports the data from an associated sheet. Results can be printed to a window or output to the sheet for further analysis. Note that in our AMPL model, we essentially had three “features” of a bond that we wanted to take into account. - Maturity - Rating - Yield We constrained the level of two of these and then optimized the third one. The constraints for the features all have the same basic form. What if we wanted to add another feature? We can make the list of features a set and use the concept of a two-dimensional parameter to create a table of bond data. The Generalized Model (bonds_features.mod) set bonds; set features; param bond_data {bonds, features}; param limits{features}; param yield{bonds}; param max_cash; var buy {bonds} >= 0; maximize obj : sum {i in bonds} yield[i] * buy[i]; subject to cash_limit : sum {i in bonds} buy[i] <= max_cash; subject to limit_constraints {f in features}: sum {i in bonds} bond_data[i, f]*buy[i] <= max_cash*limits[f]; Outline 1. Introduction 2. Solver Studio 3. Traditional Modeling Environments 4. Python-Based Modeling 5.
Comparative Case Studies Example: Facility Location Problem - We have $n$ locations and $m$ customers to be served from those locations. - There is a fixed cost $c_j$ and a capacity $W_j$ associated with facility $j$. - There is a cost $d_{ij}$ and demand $w_{ij}$ for serving customer $i$ from facility $j$. We have two sets of binary variables. - $y_j$ is 1 if facility $j$ is opened, 0 otherwise. - $x_{ij}$ is 1 if customer $i$ is served by facility $j$, 0 otherwise. **Capacitated Facility Location Problem** $$ \begin{align*} \text{min} & \quad \sum_{j=1}^{n} c_j y_j + \sum_{i=1}^{m} \sum_{j=1}^{n} d_{ij} x_{ij} \\ \text{s.t.} & \quad \sum_{j=1}^{n} x_{ij} = 1 \quad \forall i \\ & \quad \sum_{i=1}^{m} w_{ij} x_{ij} \leq W_j \quad \forall j \\ & \quad x_{ij} \leq y_j \quad \forall i, j \\ & \quad x_{ij}, y_j \in \{0, 1\} \quad \forall i, j \end{align*} $$

from pulp import LpProblem, LpVariable, LpBinary, lpSum
from products import REQUIREMENT, PRODUCTS
from facilities import FIXED_COST, LOCATIONS, CAPACITY

prob = LpProblem("Facility_Location")

ASSIGNMENTS = [(i, j) for i in LOCATIONS for j in PRODUCTS]
assign_vars = LpVariable.dicts("x", ASSIGNMENTS, 0, 1, LpBinary)
use_vars = LpVariable.dicts("y", LOCATIONS, 0, 1, LpBinary)

prob += lpSum(use_vars[i] * FIXED_COST[i] for i in LOCATIONS)
for j in PRODUCTS:
    prob += lpSum(assign_vars[(i, j)] for i in LOCATIONS) == 1
for i in LOCATIONS:
    prob += lpSum(assign_vars[(i, j)] * REQUIREMENT[j]
                  for j in PRODUCTS) <= CAPACITY * use_vars[i]

prob.solve()

for i in LOCATIONS:
    if use_vars[i].varValue > 0:
        print "Location ", i, " is assigned: ",
        print [j for j in PRODUCTS if assign_vars[(i, j)].varValue > 0]

PuLP Basics: Facility Location Example

# The requirements for the products
REQUIREMENT = { 1 : 7, 2 : 5, 3 : 3, 4 : 2, 5 : 2, }
# Set of all products
PRODUCTS = REQUIREMENT.keys()
PRODUCTS.sort()
# Costs of the facilities
FIXED_COST = { 1 : 10, 2 : 20, 3 : 16, 4 : 1, 5 : 2, }
# Set of facilities
LOCATIONS = FIXED_COST.keys()
LOCATIONS.sort()
# The capacity of the facilities
CAPACITY =
8

DiPPy is an extension of PuLP that provides the ability to model decomposition-based structure. With DipPy, one can implement customized subroutines for column generation, cut generation, heuristics, branching, etc. in Python. The framework handles the incorporation of these into an overall branch-and-price-and-cut algorithm. This makes it easy to get up and running with relatively sophisticated methodology. It also makes it easy to compare methodologies with as many variables fixed as possible. Switching from branch and cut to branch and price is as easy as changing a parameter. With SolverStudio, it can even be done from a spreadsheet! There are defaults for all methods: the user need not implement anything to utilize the underlying solver.

import dippy
from pulp import LpVariable, LpBinary, lpSum
from products import REQUIREMENT, PRODUCTS
from facilities import FIXED_COST, LOCATIONS, CAPACITY

prob = dippy.DipProblem("Facility_Location")

ASSIGNMENTS = [(i, j) for i in LOCATIONS for j in PRODUCTS]
assign_vars = LpVariable.dicts("x", ASSIGNMENTS, 0, 1, LpBinary)
use_vars = LpVariable.dicts("y", LOCATIONS, 0, 1, LpBinary)

prob += lpSum(use_vars[i] * FIXED_COST[i] for i in LOCATIONS)
for j in PRODUCTS:
    prob += lpSum(assign_vars[(i, j)] for i in LOCATIONS) == 1
for i in LOCATIONS:
    prob.relaxation[i] += lpSum(assign_vars[(i, j)] * REQUIREMENT[j]
                                for j in PRODUCTS) <= CAPACITY * use_vars[i]

dippy.Solve(prob, {'doPriceCut': '1'})

for i in LOCATIONS:
    if use_vars[i].varValue > 0:
        print "Location ", i, " is assigned: ",
        print [j for j in PRODUCTS if assign_vars[(i, j)].varValue > 0]

Dippy Callbacks

```python
def solve_subproblem(prob, index, redCosts, convexDual):
    ...
    return knapsack01(obj, weights, CAPACITY)

def knapsack01(obj, weights, capacity):
    ...
    return solution

def first_fit(prob):
    ...
    return bvs

prob.init_vars = first_fit

def choose_branch(prob, sol):
    ...
    return ([], down_branch_ub, up_branch_lb, [])

def generate_cuts(prob, sol):
    ...
    return new_cuts

def heuristics(prob, xhat, cost):
    ...
    return sols

dippy.Solve(prob, {'doPriceCut': '1'})
```

In contrast to PuLP, Pyomo allows the creation of “abstract” models, like other AMLs. Note, however, that it can also be used to create concrete models, like PuLP. Like AMPL, it can read data from a wide range of sources. It also allows constraints to involve more general functions. As we will see, this power comes with some increased complexity.

model = ConcreteModel()
Bonds, Features, BondData, Liabilities = read_data('ded.dat')
Periods = range(len(Liabilities))
model.buy = Var(Bonds, within=NonNegativeReals)
model.cash = Var(Periods, within=NonNegativeReals)
model.obj = Objective(expr=model.cash[0] + sum(BondData[b, 'Price'] * model.buy[b] for b in Bonds), sense=minimize)

def cash_balance_rule(model, t):
    return (model.cash[t-1] - model.cash[t]
            + sum(BondData[b, 'Coupon'] * model.buy[b]
                  for b in Bonds if BondData[b, 'Maturity'] >= t)
            + sum(BondData[b, 'Principal'] * model.buy[b]
                  for b in Bonds if BondData[b, 'Maturity'] == t)
            == Liabilities[t])

model.cash_balance = Constraint(Periods[1:], rule=cash_balance_rule)

1. Introduction 2. Solver Studio 3. Traditional Modeling Environments 4. Python-Based Modeling 5.
Comparative Case Studies

```python
from pulp import LpProblem, LpVariable, lpSum, LpMaximize, value

prob = LpProblem("Dedication Model", LpMaximize)
X1 = LpVariable("X1", 0, None)
X2 = LpVariable("X2", 0, None)
prob += 4*X1 + 3*X2
prob += X1 + X2 <= 100
prob += 2*X1 + X2 <= 150
prob += 3*X1 + 4*X2 <= 360
prob.solve()
print 'Optimal total cost is: ', value(prob.objective)
print "X1 :", X1.varValue
print "X2 :", X2.varValue
```

Notes About the Model

- Like the simple AMPL model, we are not using indexing or any sort of abstraction here.
- The syntax is very similar to AMPL.
- To achieve separation of data and model, we use Python's `import` mechanism.

```python
from pulp import LpProblem, LpVariable, lpSum, LpMaximize, value
from bonds import bonds, max_rating, max_maturity, max_cash

prob = LpProblem("Bond Selection Model", LpMaximize)
buy = LpVariable.dicts('bonds', bonds.keys(), 0, None)
prob += lpSum(bonds[b]['yield'] * buy[b] for b in bonds)
prob += lpSum(buy[b] for b in bonds) <= max_cash, "cash"
prob += (lpSum(bonds[b]['rating'] * buy[b] for b in bonds)
         <= max_cash * max_rating, "ratings")
prob += (lpSum(bonds[b]['maturity'] * buy[b] for b in bonds)
         <= max_cash * max_maturity, "maturities")
```

Notes About the Model

- We can use Python's native `import` mechanism to get the data.
- Note, however, that the data is read and stored *before* the model.
- This means that we don't need to declare sets and parameters.
- Carriage returns are syntactic (parentheses imply line continuation).

**Constraints**

- Naming of constraints is optional and only necessary for certain kinds of post-solution analysis.
- Constraints are added to the model using a very intuitive syntax.
- Objectives are nothing more than expressions that are to be optimized rather than explicitly constrained.

**Indexing**

- Indexing in Python is done using the native dictionary data structure.
- Note the extensive use of comprehensions, which have a syntax very similar to quantifiers in a mathematical model.
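These indexing idioms can be seen in plain Python, with no solver installed. The following is a minimal sketch using hypothetical bond data (illustrative values only, not the data from the slides):

```python
# Plain-Python sketch of the indexing idioms described above.
# The bond data here is hypothetical, for illustration only.
bonds = {
    'A': {'yield': 4, 'rating': 2, 'maturity': 3},
    'B': {'yield': 3, 'rating': 1, 'maturity': 4},
}

# A comprehension plays the role of a quantifier ("sum over all b in Bonds"):
total_yield = sum(bonds[b]['yield'] for b in bonds)

# A condition in a comprehension selects a subset, like {b : maturity_b >= 4}:
long_bonds = [b for b in bonds if bonds[b]['maturity'] >= 4]
```

Here `total_yield` is 7 and `long_bonds` is `['B']`; replacing `sum` with `lpSum` over decision variables gives exactly the constraint expressions shown above.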
Bond Portfolio Example: Solution in PuLP

```python
prob.solve()

epsilon = .001
print 'Optimal purchases:'
for i in bonds:
    if buy[i].varValue > epsilon:
        print 'Bond', i, ":", buy[i].varValue
```

```python
bonds = {'A': {'yield': 4, 'rating': 2, 'maturity': 3},
         'B': {'yield': 3, 'rating': 1, 'maturity': 4},
        }

max_cash = 100
max_rating = 1.5
max_maturity = 3.6
```

Notes About the Data Import

- We are storing the data about the bonds in a "dictionary of dictionaries."
- With this data structure, we don't need to separately construct the list of bonds.
- We can access the list of bonds as `bonds.keys()`.
- Note, however, that we still end up hard-coding the list of features and we must repeat this list of features for every bond.
- We can avoid this using some advanced Python programming techniques, but SolverStudio makes this easy.

```python
buy = LpVariable.dicts('bonds', bonds, 0, None)

for f in features:
    if limits[f] == "Opt":
        if sense[f] == '>':
            prob += lpSum(bond_data[b, f] * buy[b] for b in bonds)
        else:
            prob += lpSum(-bond_data[b, f] * buy[b] for b in bonds)
    else:
        if sense[f] == '>':
            prob += (lpSum(bond_data[b, f] * buy[b] for b in bonds)
                     >= max_cash * limits[f], f)
        else:
            prob += (lpSum(bond_data[b, f] * buy[b] for b in bonds)
                     <= max_cash * limits[f], f)

prob += lpSum(buy[b] for b in bonds) <= max_cash, "cash"
```

Notes About the SolverStudio PuLP Model

- We've explicitly allowed the option of optimizing over one of the features, while constraining the others.
- Later, we'll see how to create tradeoff curves showing the tradeoffs among the constraints imposed on various features.

Definition 1 *Dedication* or *cash flow matching* refers to the funding of known future liabilities through the purchase of a portfolio of risk-free non-callable bonds.

**Notes:**

- Dedication is used to eliminate interest rate risk.
- Dedicated portfolios do not have to be managed.
- The goal is to construct such a portfolio at a minimal price from a set of available bonds.
- This is a multi-period model.
Example: Portfolio Dedication

- A pension fund faces liabilities totalling $\ell_j$ for years $j = 1, \ldots, T$.
- The fund wishes to dedicate these liabilities via a portfolio comprised of $n$ different types of bonds.
- Bond type $i$ costs $c_i$, matures in year $m_i$, and yields a yearly coupon payment of $d_i$ up to maturity.
- The principal paid out at maturity for bond $i$ is $p_i$.

We assume that for each year \( j \) there is at least one type of bond \( i \) with maturity \( m_i = j \), and there are none with \( m_i > T \). Let \( x_i \) be the number of bonds of type \( i \) purchased, and let \( z_j \) be the cash on hand at the beginning of year \( j \) for \( j = 0, \ldots, T \). Then the dedication problem is the following LP.

\[
\begin{align*}
\min_{(x,z)} & \quad z_0 + \sum_i c_i x_i \\
\text{s.t.} & \quad z_{j-1} - z_j + \sum_{\{i: m_i \geq j\}} d_i x_i + \sum_{\{i: m_i = j\}} p_i x_i = \ell_j, \quad (j = 1, \ldots, T - 1) \\
& \quad z_T + \sum_{\{i: m_i = T\}} (p_i + d_i) x_i = \ell_T, \\
& \quad z_j \geq 0, \quad j = 1, \ldots, T, \\
& \quad x_i \geq 0, \quad i = 1, \ldots, n.
\end{align*}
\]

Here is the model for the portfolio dedication example.

```AMPL
set Bonds;
param T > 0 integer;
param Liabilities {1..T};
param Price {Bonds};
param Maturity {Bonds};
param Coupon {Bonds};
param Principal {Bonds};

var buy {Bonds} >= 0;
var cash {0..T} >= 0;

minimize total_cost : cash[0] + sum {i in Bonds} Price[i] * buy[i];

subject to cash_balance {t in 1..T}:
   cash[t-1] - cash[t]
   + sum {i in Bonds : Maturity[i] >= t} Coupon[i] * buy[i]
   + sum {i in Bonds : Maturity[i] = t} Principal[i] * buy[i]
   = Liabilities[t];
```

In multi-period models, we have to somehow represent the set of periods. Such a set is different from a generic set because it involves ranged data. We must somehow do arithmetic with elements of this set in order to express the model. In AMPL, a ranged set can be constructed using the syntax `1..T`. Both endpoints are included in the range.
Another important feature of the above model is the use of conditionals in the limits of the sum. Conditionals can be used to choose a subset of the items in a given set satisfying some condition.

```python
Bonds, Features, BondData, Liabilities = read_data('ded.dat')

prob = LpProblem("Dedication Model", LpMinimize)

buy = LpVariable.dicts("buy", Bonds, 0, None)
cash = LpVariable.dicts("cash", range(len(Liabilities)), 0, None)

prob += cash[0] + lpSum(BondData[b, 'Price'] * buy[b] for b in Bonds)

for t in range(1, len(Liabilities)):
    prob += (cash[t-1] - cash[t]
             + lpSum(BondData[b, 'Coupon'] * buy[b] for b in Bonds
                     if BondData[b, 'Maturity'] >= t)
             + lpSum(BondData[b, 'Principal'] * buy[b] for b in Bonds
                     if BondData[b, 'Maturity'] == t)
             == Liabilities[t], "cash_balance_%s" % t)
```

We are parsing the AMPL data file with a custom-written function `read_data` to obtain the data. The data is stored in a two-dimensional table (dictionary with tuples as keys). The `range` operator is used to create ranged sets in Python. The upper endpoint is not included in the range and ranges start at 0 by default (`range(3) = [0, 1, 2]`). The `len` operator gets the number of elements in a given data structure. Python also supports conditions in comprehensions, so the model reads naturally in Python's native syntax. See also `FinancialModels.xlsx:Dedication-PuLP`.
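The cash-balance logic in these constraints can also be checked outside of any solver. Below is a minimal sketch, with hypothetical bond data and a helper `is_dedicated` of our own (not part of PuLP or the slides), that simulates the recursion "cash carried forward + coupons + maturing principal − liability" year by year:

```python
# Hedged sketch: verify a candidate portfolio against the cash-balance
# recursion. BondData / Bonds / Liabilities are hypothetical illustrative
# values, not the contents of ded.dat.
BondData = {('A', 'Coupon'): 0.05, ('A', 'Principal'): 1.0, ('A', 'Maturity'): 1,
            ('B', 'Coupon'): 0.04, ('B', 'Principal'): 1.0, ('B', 'Maturity'): 2}
Bonds = ['A', 'B']
Liabilities = [0, 100, 150]   # entry 0 is unused; liabilities due in years 1..2

def is_dedicated(buy, cash0, tol=1e-6):
    """Return True if cash0 plus the portfolio's cash flows covers every
    liability, carrying any surplus cash forward from year to year."""
    cash = cash0
    for t in range(1, len(Liabilities)):
        # coupons from all bonds still alive in year t
        cash += sum(BondData[b, 'Coupon'] * buy[b] for b in Bonds
                    if BondData[b, 'Maturity'] >= t)
        # principal from bonds maturing exactly in year t
        cash += sum(BondData[b, 'Principal'] * buy[b] for b in Bonds
                    if BondData[b, 'Maturity'] == t)
        cash -= Liabilities[t]
        if cash < -tol:
            return False
    return True
```

With this data, `is_dedicated({'A': 90, 'B': 145}, cash0=0)` holds, while cutting the year-2 bond back to 130 units leaves year 1 short.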
Concrete Pyomo Model for Dedication (dedication-PyomoConcrete.py)

```python
model = ConcreteModel()

Bonds, Features, BondData, Liabilities = read_data('ded.dat')
Periods = range(len(Liabilities))

model.buy = Var(Bonds, within=NonNegativeReals)
model.cash = Var(Periods, within=NonNegativeReals)

model.obj = Objective(expr=model.cash[0] +
                      sum(BondData[b, 'Price'] * model.buy[b] for b in Bonds),
                      sense=minimize)

def cash_balance_rule(model, t):
    return (model.cash[t-1] - model.cash[t]
            + sum(BondData[b, 'Coupon'] * model.buy[b] for b in Bonds
                  if BondData[b, 'Maturity'] >= t)
            + sum(BondData[b, 'Principal'] * model.buy[b] for b in Bonds
                  if BondData[b, 'Maturity'] == t)
            == Liabilities[t])

model.cash_balance = Constraint(Periods[1:], rule=cash_balance_rule)
```

Notes on the Concrete Pyomo Model

- This model is almost identical to the PuLP model.
- The only substantial difference is the way in which constraints are defined, using "rules."
- Indexing is implemented by specifying additional arguments to the rule functions.
- When the rule function specifies an indexed set of constraints, the indices are passed through the arguments to the function.
- The model is constructed by looping over the index set, constructing each associated constraint.
- Note the use of the Python slice operator to extract a subset of a ranged set.

Instantiating and Solving a Pyomo Model

- The easiest way to solve a Pyomo model is from the command line.
  `pyomo -solver=cbc -summary dedication-PyomoConcrete.py`
- It is instructive, however, to see what is going on under the hood.
- Pyomo explicitly creates an "instance" in a solver-independent form.
- The instance is then translated into a format that can be understood by the chosen solver.
- After solution, the result is imported back into the instance class.
- We can explicitly invoke these steps in a script.
- This gives a bit more flexibility in post-solution analysis.
```python
epsilon = .001

opt = SolverFactory("cbc")
instance = model.create()
results = opt.solve(instance)
instance.load(results)

print "Optimal strategy"
for b in instance.buy:
    if instance.buy[b].value > epsilon:
        print 'Buy %f of Bond %s' % (instance.buy[b].value, b)
```

Abstract Pyomo Model for Dedication (dedicate-PyomoAbstract.py)

```python
model = AbstractModel()

model.Periods = Set()
model.Bonds = Set()

model.Price = Param(model.Bonds)
model.Maturity = Param(model.Bonds)
model.Coupon = Param(model.Bonds)
model.Principal = Param(model.Bonds)
model.Liabilities = Param(range(9))

model.buy = Var(model.Bonds, within=NonNegativeReals)
model.cash = Var(range(9), within=NonNegativeReals)

def objective_rule(model):
    return model.cash[0] + sum(model.Price[b] * model.buy[b]
                               for b in model.Bonds)

model.objective = Objective(sense=minimize, rule=objective_rule)

def cash_balance_rule(model, t):
    return (model.cash[t-1] - model.cash[t]
            + sum(model.Coupon[b] * model.buy[b] for b in model.Bonds
                  if model.Maturity[b] >= t)
            + sum(model.Principal[b] * model.buy[b] for b in model.Bonds
                  if model.Maturity[b] == t)
            == model.Liabilities[t])

model.cash_balance = Constraint(range(1, 9), rule=cash_balance_rule)
```

In an abstract model, we declare sets and parameters abstractly. After declaration, they can be used without instantiation, as in AMPL. When creating the instance, we explicitly pass the name of an AMPL-style data file, which is used to instantiate the concrete model.

```python
instance = model.create('dedication.dat')
```

See also FinancialModels.xlsx:Dedication-Pyomo.

Example: Short Term Financing

A company needs to make provisions for the following cash flows over the coming five months: $-150K, -100K, 200K, -200K, 300K$.
- The following options for obtaining/using funds are available:
  - The company can borrow up to $100K at 1\%$ interest per month;
  - The company can issue a 2-month zero-coupon bond yielding 2\%$ interest over the two months;
  - Excess funds can be invested at 0.3\% monthly interest.

How should the company finance these cash flows if no payment obligations are to remain at the end of the period? All investments are risk-free, so there is no stochasticity.

What are the decision variables?

- $x_i$, the amount drawn from the line of credit in month $i$,
- $y_i$, the number of bonds issued in month $i$,
- $z_i$, the amount invested in month $i$.

What is the goal?

- To maximize the cash on hand at the end of the horizon.

The problem can then be modeled as the following linear program:

\[
\begin{align*}
\max_{(x,y,z,v) \in \mathbb{R}^{12}} & \quad f(x, y, z, v) = v \\
\text{s.t.} & \\
& x_1 + y_1 - z_1 = 150 \\
& x_2 - 1.01x_1 + y_2 - z_2 + 1.003z_1 = 100 \\
& x_3 - 1.01x_2 + y_3 - 1.02y_1 - z_3 + 1.003z_2 = -200 \\
& x_4 - 1.01x_3 - 1.02y_2 - z_4 + 1.003z_3 = 200 \\
& \quad -1.01x_4 - 1.02y_3 - v + 1.003z_4 = -300 \\
& 100 - x_i \geq 0 \quad (i = 1, \ldots, 4) \\
& x_i \geq 0 \quad (i = 1, \ldots, 4) \\
& y_i \geq 0 \quad (i = 1, \ldots, 3) \\
& z_i \geq 0 \quad (i = 1, \ldots, 4) \\
& v \geq 0.
\end{align*}
\]

AMPL Model for Short Term Financing (short_term_financing.mod)

```AMPL
param T > 0 integer;
param cash_flow {0..T};
param credit_rate;
param bond_yield;
param bond_maturity > 0 integer;
param invest_rate;

# variable declarations reconstructed; they are referenced by the
# constraints below but were missing from the extracted text
var credit {-1..T} >= 0;
var bonds {-bond_maturity..T} >= 0;
var invest {-1..T} >= 0;

maximize wealth : invest[T];

subject to balance {t in 0..T} :
   credit[t] - (1 + credit_rate) * credit[t-1]
   + bonds[t] - (1 + bond_yield) * bonds[t-bond_maturity]
   - invest[t] + (1 + invest_rate) * invest[t-1] = cash_flow[t];

subject to initial_credit : credit[-1] = 0;
subject to final_credit : credit[T] = 0;
subject to initial_invest : invest[-1] = 0;
subject to initial_bonds {t in 1..bond_maturity} : bonds[-t] = 0;
subject to final_bonds {t in T+1-bond_maturity..T} : bonds[t] = 0;
```

These are the data for the example.

```AMPL
param T := 5;
param : cash_flow :=
 0  150
 1  100
 2 -200
 3  200
 4  -50
 5 -300;
param credit_rate := .01;
param bond_yield := .02;
param bond_maturity := 3;
param invest_rate := .003;
```

Note that we've created some "dummy" variables for bonds, credit, and investment before time zero. These are only for convenience, to avoid edge cases when expressing the constraints. Again, we see the use of the parameter $T$ to capture the number of periods. See also *FinancialModels.xlsx:Short-term-financing-AMPL*.

```python
from pulp import LpProblem, LpVariable, LpMaximize

from short_term_financing_data import cash, c_rate, b_yield
from short_term_financing_data import b_maturity, i_rate

T = len(cash)

# problem object (creation not shown on the original slide)
prob = LpProblem("Short_Term_Financing", LpMaximize)

credit = LpVariable.dicts("credit", range(-1, T), 0, None)
bonds = LpVariable.dicts("bonds", range(-int(b_maturity), T), 0, None)
invest = LpVariable.dicts("invest", range(-1, T), 0, None)

prob += invest[T-1]

for t in range(0, T):
    prob += (credit[t] - (1 + c_rate) * credit[t-1]
             + bonds[t] - (1 + b_yield) * bonds[t - int(b_maturity)]
             - invest[t] + (1 + i_rate) * invest[t-1] == cash[t])

prob += credit[-1] == 0
prob += credit[T-1] == 0
prob += invest[-1] == 0
for t in range(-int(b_maturity), 0):
    prob += bonds[t] == 0
for t in range(T - int(b_maturity), T):
    prob += bonds[t] == 0
```
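Independently of PuLP, the balance equations can be evaluated directly for any candidate plan. The sketch below uses a helper `residuals` of our own (not from the slides), with decisions supplied as sparse dicts keyed by month; months at or before zero default to zero, mirroring the model's dummy variables, and the rates are those of the LP above (1% credit, 2% two-month bond, 0.3% investment):

```python
# Hedged sketch: residuals of the month-by-month balance equations of
# the short-term financing LP (2-month bond; rates as in the example).
def residuals(x, y, z, b, credit_rate=0.01, bond_yield=0.02, invest_rate=0.003):
    """x[t]: credit drawn in month t, y[t]: bonds issued, z[t]: invested.
    b lists the required net payments for months 1..T. Missing dict
    entries (including months <= 0) are treated as zero."""
    def g(d, t):
        return d.get(t, 0.0)
    T = len(b)
    return [g(x, t) - (1 + credit_rate) * g(x, t - 1)
            + g(y, t) - (1 + bond_yield) * g(y, t - 2)
            - g(z, t) + (1 + invest_rate) * g(z, t - 1)
            - b[t - 1]
            for t in range(1, T + 1)]
```

A plan satisfies the balance equations exactly when every residual is zero. For instance, drawing 150 of credit in month 1 zeroes the first equation of a plan with payments `[150, 0, 0]`, but leaves a repayment of about 151.50 outstanding in month 2.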
A Game of Persistence, Self-doubt, and Curiosity: Surveying Code Literacy in Digital Humanities

Elli Bleeker¹, Marijn Koolen¹, Kaspar Beelen³, Liliana Melgar⁴, Joris van Zundert¹, and Sally Chambers²

¹Huygens Institute for Dutch History and Culture
²Ghent University
³Alan Turing Institute
⁴Utrecht University

The interpretation of "code" and its role in the digital humanities has been a topic of debate ever since the first experiments in "humanities computing" in the late 1950s. Only a few years ago, the question of whether humanities students need to know how to code was genuinely provocative. At present, digital humanists seem to agree that knowing how to code is relevant, if not essential, for digital humanities research. However, there is a lack of agreement on what the community means by "code literacy", and as a result the efforts to improve code literacy among students and researchers are dispersed. This paper presents what is, to our knowledge, the largest survey to date on code literacy and related questions in the field. We present the results and analysis for the first two of the four overarching research questions that the survey aimed to tackle: 1) What are the definitions and interpretations of code literacy across humanities disciplines? 2) How important is code literacy as part of digital humanities scholarship? We explain which elements or dimensions of a definition of "code literacy" the survey answers identify as most important. Among these are the different levels of competence and the varying importance of code literacy across disciplines. The paper concludes by discussing the implications of these findings, opening up questions for the debate on academic curricula regarding digital humanities.
Keywords: code literacy; pedagogy; survey; computational thinking; coding skills; humanities computing/programming; concept analysis

1 Introduction

"Should humanists learn to code?" Less than a decade ago this question would have ignited quite a controversy in the field of digital humanities (DH). Today, the consensus is that a certain level of code literacy is preferred. Instead of arguing whether code literacy deserves to be part of DH's skill set, the debate has moved on to discussing what it means, exactly, to be code literate. How we define code literacy appears to be a Rorschach test: our definition depends on "our values, our experiences, our skills, our visions of the future as it pertains to technology, to computers, to communications, to information" (Beverly Hunter, cited in Dobberstein 1993: 430). The somewhat confused state of definitions and discourse is not much helped by a proliferation of adjacent, overlapping, and vaguely defined literacies such as "computer literacy" (Dobberstein 1993, Tafazoli et al. 2017), "digital literacy" (Spante et al. 2018), "media literacy" (Potter 2010), and so on. After mapping the use of the term "computer literacy" between 1965 and 1985, Kenneth King concluded, half-jokingly, that this type of literacy is "widely regarded as an impossible term to define; however, whatever it is, it is important for every student to have it" (King 1985: 20). Even those opposing the promotion of computer literacy, such as Douglas Noble – "What has apparently convinced an entire population that something as vague and worthless as computer literacy is essential to their lives?" – acknowledge that the community has "unusual difficulty arriving at a suitable definition" (Noble 1984: 602, 607). It does not help that a definition is inevitably influenced by the technological developments of the time. Over the years, various researchers have engaged with the topic of code literacy, presenting valuable insights.
Their observations remained, however, limited to their own experiences with teaching digital humanities courses or with working in a digital humanities context (e.g., Van Zundert, Antonijević, and Andrews 2020). During a recent roundtable at the DH Benelux 2019 conference in which we explored the perception of code literacy in the field, the participants expressed strong opinions without providing much empirical evidence (Melgar et al. 2019). The animated discussion was largely informed by anecdotal evidence (repeating phrases such as "in my experience, ..." and "in my department, ..."). To establish an informed and evidence-based definition of code literacy in the humanities, we decided to ask the community for their opinion by means of a survey. In this paper, we report on the results of the questionnaire answers, focusing, first, on the present-day definitions and interpretations of code literacy across humanities disciplines and, secondly, on the importance of code literacy as a part of DH scholarship. The survey used a questionnaire that was widely distributed and received a large number of responses from a demographically diverse audience. As a result, the responses allow us to analyse how factors like background, education, and experience shape opinions about code literacy. Instead of providing a clear-cut definition of "code literacy" – which would suggest a unity of perspective that in reality does not exist – this paper highlights a number of aspects that are considered important across the DH community. Accordingly, the contribution of this paper lies in a community-inferred vocabulary and a practical framework for the discussion, evaluation, teaching, and promotion of code literacy in digital humanities.

---

1 Callaway et al. 2020 offer an overview of the topics discussed within the DH community over the past ten years; specifically paragraphs 3, 6, and 13 on the topic of "code".
A Google Scholar query for "code", "coding", and "literacy" shows that the discussion is ongoing and anything but limited to the humanities domain.

2 Background

The interpretation of "code" and its role in the digital humanities has been a topic of discussion ever since the first experiments in "humanities computing" in the late 1950s. Few studies have, however, reviewed the history of this notion in a systematic way. This is surprising given that each generation of scholars was required to learn a different set of computer skills, so that perceptions of code (literacy) changed over the years. Since the discussion on code literacy (in the humanities) is distributed over a wide range of sources, reconstructing its history proved to be a difficult task. Academic publications contain various traces and clues. More recent debates tend to take place online, on platforms like Twitter, or in blog posts and DH forums. In order to contextualise the findings of our survey, this section synthesises the historical discourse on code literacy on the basis of four shared themes:

1. The perceived role and potentially divisional nature of programming;
2. The importance of context for (teaching) code literacy;
3. The influence of code literacy on humanist thinking;
4. A distinction between multiple levels of literacy.

The literature used in this section was selected following a combination of purposive (or judgement) sampling and snowballing (or citation tracking) (Wildemuth, 2009). Starting from several seminal works on the topic of DH and code literacy, such as Gold (2012a) and Vanhoutte, Nyhan, and Terras (2013), we pursued relevant citations in their bibliographies. Finally, we carried out several search queries via Google Scholar and the JSTOR archive with various combinations of the keywords "code", "coding", "computer", "programming", "digital humanities", "humanities", "electronic", "literacy", and "literate".
Although we will not claim that our literature review is exhaustive, we trust that it is representative. A final note on terminology: historical sources often refer to "computer literacy" rather than "code literacy". For consistency we will use the term "code literacy" throughout.

2.1 In or Out: the Programming Divide

A major theme in the discourse on code literacy is the question of programming. The Oxford English Dictionary (OED) defines "literacy" – rather narrowly – as "the ability to read and write". If applied to code literacy, then, being code literate can be taken to mean "the ability to read and write code". Incidentally, "code" is already an ambiguous term in the humanities, because it can refer to text encoding with XML or HTML as well as to using programming languages to write computer programs. For example, being code literate had a different meaning in the 1960s, when few scholars owned a (in the parlance of the time) microcomputer, compared to the early 1980s, when the development of personal computers with a graphical user interface (GUI) took flight, and in the 1990s during the rise of the world wide web.

Interestingly, educators have recently reported that students in STEM disciplines are no longer familiar with the directory structure, saving thousands of files on their desktop instead. This development could be explained by the omnipresence of the search function, which no longer requires users to know where their files are located (Chin, 2021). Although we will touch upon it briefly, the educational aspects of code literacy are not within the scope of the present article. As Gold notes, "some conversations, especially those on Twitter – a platform used extensively by digital humanists – are hopelessly dispersed and sometimes even impossible to reconstitute only a few months after they have taken place" (Gold, 2012b).
The question whether programming skills are a prerequisite for digital humanists has divided the field from the 1950s until today (see Dobberstein 1993; van Zundert and Andrews 2017, ii86; Ide 1987). Especially when postulated as essential, proficiency in programming languages risks becoming a harmful criterion for exclusion. In the words of Douglas Rushkoff: "program or be programmed" (Rushkoff, 2010). Earhart, for instance, notes an "increasing tension" between DH practitioners and DH theorists. As she sees it, both sides look down on the other: code literate scholars spurn those who are not (Ramsay, 2012), while simultaneously "builders" are not considered real digital humanists because "they haven't theorized their work within the context of humanities and technology" (Earhart et al., 2016). Annette Vee, too, points to the intricate tensions that hide behind the deceptively obvious compound formed by code and literacy. Coming from the sociologically oriented New Literacy approach, she compares the historical development of textual literacy to the ideological push for "mass programming movements" (Vee, 2017, 152) and thus explains how a particular set of skills or capabilities can become a much coveted "literacy". Depending on one's perspective, such a literacy may present itself as an individually empowering advantage or rather as a power game that excludes specific groups of people. In 2015, O'Sullivan, Jakacki, and Galvin conducted a survey to find out whether programming is indeed perceived as a barrier to joining the DH community. The trigger for the survey, according to O'Sullivan et al., was the essay "Who's in and Who's Out?" (Ramsay, 2013b), in which Ramsay famously declared that digital humanists should learn to code if they were to be part of the field. Although this essay caused quite a backlash and Ramsay later nuanced his statement, the sentiment appears to be persistent.
The survey, which was completed by 96 respondents in the field, set out to study the “relevant attitudes in relation to [software] development within Digital Humanities projects” (O’Sullivan et al., 2015, 142). Notably—and against the authors’ expectations—it was found that young scholars attach less importance to programming than older generations: 40% of the respondents in the 25-35 age group considered programming as essential to DH work, in contrast to 60% in the category of 50 years or older. Senior scholars usually claimed to do the programming themselves, whereas younger scholars either collaborated with skilled developers or delegated coding to colleagues. Younger respondents also considered themselves less “technically proficient” (O’Sullivan et al., 2015, 144). More recently, Callaway, Turner, Stone, and Halstrom (2020) examined “the gate keeping around coding and technical skills more broadly” by topic-modelling a corpus of 334 (English) definitions of DH found in readers and companions on the topic. In a way, their approach can be considered an indirect survey of the field. The authors found “a range of different views towards coding” (Callaway et al., 2020, §13). Few contributions took up a “hard stance” (e.g., humanists should learn how to code); most discussed whether programming skills formed a threshold for participation in DH and, if so, how to overcome related hurdles. Interestingly, the authors noted that—being early-career academics themselves—interpreting the topics proved more challenging (and rewarding) than understanding the technical aspects (of topic modelling). This led them to conclude that “less emphasis should be placed on digital competencies and more emphasis on the step of interpretation that comes after the use of the digital tool” (Callaway et al., 2020, 28).

---
4 In a survey among 96 digital humanists, O’Sullivan, Jakacki, and Galvin found considerable disagreement on the definition of programming: “many respondents mention text encoding, particularly XML and HTML, as opposed to more sophisticated dynamic programming languages” (O’Sullivan et al., 2015, 145).
5 It is worth pointing out that the term code literacy (or any of its related terms) rarely if ever appears in the computer science literature from this period. The explanation for this could be quite simply that “computer scientists have no more reason to refer to each other as ‘computer literate’ than say, physicists have in referring to their peers as ‘science literate’” (Dobberstein, 1993, 430).

2.2 The Computer in Context

Indeed, most curricula concentrating on humanities and computing have taken code literacy to be more a broad than a narrow concept, one which usually comprises more than just programming skills. Early on, Zemsky contended that “we [historians] must ultimately invent a methodology—including computer programs—of our own, a methodology designed to cope with the peculiar kinds of evidence with which we deal” (Zemsky, 1969, 39). During a three-day workshop, Teaching Computers and the Humanities Course (see also Hockey 1986 and Tannenbaum 1987), all participants—educators in humanities computing—agreed that when teaching computer skills “the example data must be from humanities disciplines” (Hockey, 1986, 228). And, Ide adds, they generally agreed that the computational methodologies need to be contextualised by traditional methodologies from the relevant humanities disciplines (Ide, 1987, 212). In “Technology and the Historian”, Crymble discusses the changing role of technology in the history classroom. Taking a longitudinal perspective—broadly ranging from the 1980s to the 2010s—he describes the different ways digital technologies have been embedded in the historical curriculum: the initial emphasis on coding and counting (i.e.
statistical analysis) in the 1980s was replaced with a focus on using digital, web-based technologies for public history, making historical narratives and sources accessible to a wider audience. Together with the rise of digital humanities in the 2000s, historians increasingly perceived digital methods as “tools”. Historians were less concerned with teaching students how to code than with applying digital tools to historical data. The interest in digital tools was often applied and instrumental, ignoring the mathematics that underpinned many of the methods and technologies used in digital history. Moreover, Crymble (2021) argues that “There was no overarching plan for a digital transformation of the historical classroom. Instead, it was a reactive intellectual space led by a few passionate individuals and largely ignored by the rest of the profession. These passionate few were not pushing a coherent agenda.” Reflecting on fifteen years of teaching computing to humanities students, Oakman found that mismatching “technique and subject” can result in outright “boring” courses for humanities students (Oakman, 1987, 232). To prevent this, he argues, humanists need to learn coding in combination with subjects of their interest. Furthermore, Oakman states that being code literate should include a knowledge of “the positive and negative effects of the Computer Revolution on the modern world (computers and society issues: privacy and government datafiles, automation and unemployment, robotics, technophobia, etc.)” (Oakman, 1987, 229). This definition foreshadows the current increase in interest in the sociological, political, and cultural effects of code on society, such as the emergence of the field of critical code studies (Marino, 2006), or the most recent volume of Debates in the Digital Humanities (Gold and Klein, 2019).
More recent sources also emphasise the importance of situating code literacy in its historical and social context (Vee, 2013, 43), and “to provide practical technical skills within a humanistic framework” (Clement, 2012, 24). Andrew Piper, for instance, in a recent thread on Twitter, states that to be successful as a DH scholar one needs to be able to design and employ computational methods, but one also needs to be able to frame these in a theoretical, disciplinary context. From a methodological perspective, and also relating computational work to quantified approaches, Piotrowski and Fafinski (2020) draw similar conclusions. And Ramsay, too, finds that programming offers a “methodology by which the artefacts of the human record are understood, critiqued, problematized and placed into dialogue with one another” (Ramsay, 2012, §6).

2.3 New Ways of Thinking

Acquiring practical computer skills is often associated with learning new or different ways of thinking. This pertains both to reasoning—more formal, more explicit—and to attitudes towards the role of code in DH research. In 1972, Oakman claimed that “learning to program computers [is] excellent training in rigorous thinking and logical reasoning” (quoted in Oakman 1987, 227). And, when evaluating the graduate course Computing for Humanities taught at the University of Aberdeen in 1987, Holland and Burgess concluded that the official course objectives were “relatively modest”—learning the terminology, key concepts, components, and customs of computing, including some relevant packages and “very simple programming”—while unofficially, they hoped to bring about a change in the mindset of their students: we wanted to encourage a confident, independent, exploratory attitude amongst students which we hoped would eventually give them the confidence and assurance to work out for themselves how to use new machines, new software and how to apply them to new humanities problems.
(Holland and Burgess, 1992, 268-269) Similarly, Ramsay found that learning basic programming skills alters one’s way of thinking: “what is gained when humanities students learn to think in the context of sophisticated computational tools is not only computational thinking, but also ‘humanistic thinking’”, because they learn to engage differently with traditional objects of humanistic study (Ramsay, 2012). Nick Montfort, too, sees programming “as a way of inquiring about and constructively thinking about important issues” (Montfort, 2015, 98) and argues that learning computational methods can augment, diversify, and improve humanistic methods. As noted above, the sociological study of code literacy and programming presents similar findings (Vee, 2017, 105; Burgess and Hamming, 2011).

2.4 Levels of Literacy

Over the years, several studies have proposed a taxonomy of code literacy in an attempt to analyse the concept in a systematic manner. The general impetus for creating a taxonomy of code literacy is the acknowledgement that a single formal definition of code literacy is too restrictive, since individuals may have “differing requirements” (Webster and Webster, 1985). In his analysis of code literacy in the 1980s, Dobberstein discusses different classifications, each identifying a number of levels, domains, or components of expertise (Blau, 1985; Webster and Webster, 1985; Konar, Kraut, and Wong, 1986, among others). Notably, all taxonomies discussed by Dobberstein are hierarchical and define code literacy along an ascending scale of expertise. A disadvantage of such a scale is that it places “all users on the same continuum of expertise” (Dobberstein, 1993, 431). Consequently, the taxonomy still presents a rather narrow concept of code literacy.

---
6 See the discussion on Twitter: https://twitter.com/_akpiper/status/1430265428711034881 August 24, 2021.
In line with the studies discussed in §2.2, Dobberstein argues for a “context-sensitive” approach to code literacy, which accords with domain-specific requirements and skills. 2.5 Summarising the Status Quo on Code Literacy The discourse on code literacy in the humanities can be characterised by several recurring themes. This is not to say that scholars agree on the themes: overall, we noted a general disagreement—and even confusion—in the literature about the (methodological) role and status of code and code literacy in the humanities. Another topic of discussion is the origins, motivations, and ramifications of the use of “code literacy” as a concept in the humanities. Several contributions mentioned the positive effect of (acquiring) computational skills on humanist research methods: scholars learn to be more formal and more explicit in their argumentation, and to have a confident, exploratory attitude toward computational methods. At the same time, scholars have emphasised the value of a humanist perspective on the sociological effects of code. There is no lack of opinions from (digital) humanist scholars and educators, but there is little actual research from humanists into the functioning of code and its (methodological) effects in the humanities domain or in our society as a whole. Without a vocabulary or a clearly defined framework, the debate risks getting stuck in confusion and misunderstanding, making it hard to find consensus on the topic. In the past, a number of studies have proposed a taxonomy of code literacy that distinguishes various levels, components, types of knowledge and skills. However, the abstract, one-size-fits-all taxonomies have proven problematic, as they fail to capture the disciplinary diversity in the humanities. A context-sensitive definition of code literacy, one which is firmly embedded into a humanities discipline, is suggested as more appropriate. 
The contextual aspect of code literacy is emphasised by several scholars: in this definition, being code literate would include understanding how computational technologies can be applied to humanist subjects and methodologies. Opinions differ on whether “applying” means using existing tools, developing new software, or conceptualising computational methods. Indeed, a large part of the discussion tends to focus on whether being code literate also implies being able to read and write code. Having (or lacking) programming skills is often felt to create a gap between DH scholars and practitioners. Some DH scholars consider programming an essential component of DH research, while others prefer theorising over practice. Both sides tend to perceive the other as exclusionary. In fact, a simple dichotomy between “programming skills” and “thinking skills” does not seem tenable in the case of code literacy in the digital humanities. As Burgess and Hamming (2011) and Vee (2017) argue, the distinction between programming as mere material labour and academic scrutiny as pure intellectual endeavour grossly underestimates the particular type of interpretation and understanding of information that code literacy affords, and which cannot be performed without that particular literacy.

3 Methods

The studies discussed in the previous section reveal the complexity of the topic of code literacy: there is no lack of valuable definitions and approaches, but all of these are based on theoretical considerations only or on limited empirical observation. Furthermore, what counted as code literacy in the 1970s is often outdated in the 2020s due to technological advances. We therefore decided to question individuals currently working in DH by means of a survey, in order to create a more stable dataset on the topic of code literacy in digital humanities.
As an instrument for collecting this data we decided upon an online questionnaire, to be distributed widely to anyone working in DH, including those working in the fields of GLAM (galleries, libraries, archives and museums) and LIS (library and information science), regardless of job type or background. Designing the questionnaire took seven months, from March until October 2020, and about 38 hours of discussion spread over 27 Skype calls. We were fortunate to have a diverse team that includes members with a background in information science and the social sciences, and took care to consult several experts in survey design. 3.1 Survey design By mapping the different forms of code literacy across DH disciplines, identifying problem areas and learning about existing approaches, we aimed to understand better the challenges involved with furthering code literacy in DH. We designed the survey around four research questions: 1. What are the definitions and interpretations of code literacy across humanities disciplines? 2. How important is code literacy as part of digital humanities scholarship? 3. How can we effectively approach the teaching and training of code literacy? 4. How can scholars (be supported to) incorporate code literacy in their research practice and methods? After providing personal information (background, age group, career stage, etc.), participants were asked four sets of questions, each set designed to address one of the research questions. We followed a mixed-method approach, using both qualitative (open-ended) and quantitative (multiple-choice) questions, which is appropriate for assessing complex topics (Timans et al., 2019). For example, participants had to indicate whether they were satisfied with their own level of code literacy on a Likert scale from 1 (dissatisfied) to 5 (highly satisfied). The respondents who indicated a number lower than 3 were asked to elaborate upon the reason why, in an open text field. 
All respondents were then asked whether they would like to expand their level of code literacy (a yes/no question). If they answered “no”, they were asked in a multiple-choice question to give at least one reason why (options ranging from “I don’t know where to start” to “I have no time”). This flexibility allows us to gain a more complete understanding of the respondent’s situation. As survey designers, we are all working in the field of DH and each of us has a different experience with learning to become code literate. Being vigilant of our own biases playing a role in the design of the survey, we opted for the “post-positivist” approach as the best way to design the survey (Ryan, 2006). A post-positivist approach recognises that the biases of the parties involved will have some effect on the targeted audience, on the survey’s structure, and on the phrasing of the questions. For example, respondents may not share our perceived value of code literacy in research practices, so we should be vigilant about this and phrase questions as neutrally as possible. We also tried to be transparent about our intentions when we distributed the survey. A post-positivist method requires us to regularly reflect on what exactly we wanted to know and how we could best articulate a question. For example, we needed to provide a broad working definition of code literacy which a large number of respondents would recognise as useful. At the same time, we realised that not all respondents would agree with how we defined it. We concluded that any definition we would provide was open for debate.

---
7 The authors would like to express their gratitude to their peers, with a special mention of Merisa Martinez, PhD candidate at the School of Library and Information Science in Borås, Sweden, for her insights and advice on effective survey design and analysis.
8 All Likert scale questions in the survey also included the option “Don’t know”.
Therefore, we asked respondents to provide their own definitions as well. They were subsequently asked to indicate whether they would be using our definition or their own when answering the remaining survey questions. We realised that this setup could complicate the analysis of the survey results, as for each question we would need to keep in mind that a respondent may have a diverging interpretation of code literacy. However, it also enabled us to gain insight into the various understandings of coding and code literacy in the DH community. We could explore the relationships between a respondent’s definition and their answers to other survey questions, and thus step outside the framework of our own experiences. As noted, the intended audience of the survey was the wider DH community, including but not limited to scholars, teachers, students, librarians, and developers. We distributed the questionnaire via contacts at international research institutes and universities, various international (digital) humanities email lists—e.g., the Humanist Discussion Group, the mailing list of the Text Encoding Initiative (TEI), DM-L (Digital Medievalist)—several DH Slack channels (e.g. DHtech), and social media (Twitter). We identified the intention and objectives of the survey in an accompanying email as well as on the survey’s home page. With 399 responses, the questionnaire has reached a large audience, but we are mindful of a self-selection effect among the respondents. That is, a digital humanist might be more inclined to answer a survey on code literacy if they consider themselves (somewhat) code literate. Another potential pitfall is that a qualitative survey is by nature a self-report measure. It aims to capture how a respondent feels about, and relates to, the object of study, but it is entirely dependent on what the respondent chooses to share. Self-report measures have therefore been said to be subjective and “less robust” (O’Brien and McCay-Peet 2017, 27).
We substantiate the validity of our findings by triangulating the responses with, first, the findings from related studies (methodological triangulation) in section 2 and, secondly, by distinguishing types of respondents based on background, career stage, etc.

3.2 Testing the Survey

We first collaboratively drafted the questionnaire using Google Docs. An extended period followed in which we commented and made suggestions on how the questions were articulated, on the order in which they appeared, and on whether they sufficiently addressed the underlying research questions. We then structured the questionnaire using the open source software LimeSurvey and tested it ourselves, discussing over the course of several meetings what did, and what did not, work. Once we agreed on the instrument as a team, we piloted the questionnaire by asking eight peers from a range of backgrounds (PhD students and postdocs in the Humanities, Cultural Heritage experts, and an Information Science professional) to test it and provide feedback. Specifically, we asked the testers to check the different “paths” through the questionnaire (depending on their answers to a certain question, a respondent would get a different follow-up question), to see if the questions were clearly phrased, and to inform us about the average time it took them to complete the questionnaire. Based on their feedback we adapted the instrument into its final form and distributed it over the aforementioned channels.

---
9 See https://www.limesurvey.org/. The software was running on a secure server of the Humanities Cluster of the Royal Netherlands Academy of Arts and Sciences.

3.3 Preparing Response Data for Analysis

In the first round of analysis (on which we report in this article) we focus on the responses to questions related to RQ1 and RQ2 – the definition and relevance of code literacy – and the personal information provided by the respondents regarding their background, career level, etc.
We analysed the results following an inductive method, as we had no hypothesis as a point of departure, but a set of research questions. The qualitative (open-ended, free text) questions were coded using open coding (Corbin and Strauss, 1990) that was triangulated by several members of our team in order to strengthen the validity of our coding. To give but one example: survey question 7 (Q7) asked all participants to define code literacy in a free text box. Three members of our team each coded the responses to this question and subsequently categorised their coding. Examples of such code categories are “Conceptual aspects of code”, “Aspects of literacy”, or “practice”. We then merged our codes into one axial list, and recoded the responses to Q7 using this axial coding. Next, the other members of our team used the axial list to code the same data. In between, we held regular meetings to discuss the codes and consolidate our views on tricky categories. We further analysed the data using Jupyter notebooks. The results of this analysis are discussed in section 4.

3.4 Privacy

The landing page of the survey informed the respondents of the privacy aspects regarding the handling of their responses, and also included the names, affiliations and email addresses of the survey creators. We also informed respondents that their responses will not be shared in raw form beyond the creators of the survey. The questionnaire was anonymous, meaning that we do not store any identifying information about the responses, such as IP address. The only exception to this was the option for respondents to leave their email address to be contacted for a follow-up interview. The questionnaire included a number of questions on personal information, such as job type, academic background, gender, career stage, and country.
We follow the data management guidelines of the KNAW and the European Union and store the data for a period of five years, to allow our analysis to be reproducible. The survey has been reviewed according to the GDPR and the Utrecht University guidelines for research involving human subjects.\footnote{For the GDPR, see: \url{https://gdpr-info.eu} For the Utrecht University guidelines, see: \url{https://www.uu.nl/en/research/research-data-management/guides/handling-personal-data#anonymise}}

---
10 Azungah 2018. See also https://conjointly.com/kb/deduction-and-induction/

## 4 Results

In this section we discuss the responses related to the two research questions addressed in this article: the definition and relevance of code literacy.

### 4.1 Demographics of Respondents

The questionnaire started with a series of demographic questions to get an understanding of who completed the questionnaire, and therefore also of potential over- and under-representation of certain groups within the DH community. We made the question about the respondent’s gender an open text field to allow them to express their gender in the way they saw fit. As the vast majority filled in either “female” or “male”, we coded the responses using four groups: female (n = 166, or 42%), male (n = 208, or 52%), other (n = 7, or 2%) and unspecified (n = 25, or 5%). The latter corresponds to respondents who left the field empty. From this we observe that female and male respondents seem to be equally represented, in the sense that the respondents are not predominantly male or predominantly female. For gender, as for other demographic aspects, we cannot make any claims as to how representative this distribution is for the DH community, as it is hard to determine the boundaries of this community. The majority of respondents work at universities and research institutes.
We asked for the country of their current affiliation (in the case of multiple affiliations, we asked them to select one), using a controlled list of country names; see Figure 1. Just under half of all respondents are from Europe (n = 190, or 48%), with another large group (n = 106, or 27%) being from North America. Not all respondents filled in an affiliation and corresponding country (n = 85, or 21%). This puts some limitations on the representativeness of this survey, as Oceania, South America, and especially Africa and Asia are underrepresented (or, in the case of Africa, not represented at all). Any claims we make should therefore be considered to apply more to Europe and North America than to the DH community more broadly. We also asked respondents about their current role(s) in DH scholarship, and provided a list of roles (e.g. academic, developer, librarian, administrative personnel, student) as well as a free-text box. Respondents could select and/or enter any number of roles. The free-form roles overlapped partially with the list of provided roles, so we coded them into five main groups: researcher, student, IT specialist (including developer, software engineer and technical support staff), information specialist (which includes archivists, librarians, curators, documentalists and data specialists) and other (including administrative personnel, editors, publishers and designers); see left side of Figure 2. The majority of respondents consider themselves researchers (n = 301, or 75%). The other two main groups, not surprisingly, are IT specialists (n = 76, or 19%) and information specialists (n = 77, or 19%). Most groups overlap substantially with the researcher group. For instance, 57% of IT specialists (n = 44) are also researchers. The other groups are more distinct, with only 19% of information specialists also having a role as IT specialist (and vice versa, 20%).
Altogether this suggests we managed to reach people with a diverse set of roles in the DH community. Next, we asked respondents about the number of years of experience they have working in the field of Digital Humanities (see right side of Figure 2). Note that this does not necessarily correspond to their number of years of experience within their current roles, as some may have been a researcher or developer for decades but only started working in DH less than a year ago.

Figure 1: Distribution of respondents across countries.

Figure 2: Distribution of respondents across roles and career phases in DH.

Almost half of all respondents have 9 or more years of experience in DH, and less than a quarter fewer than 2 years. It is difficult to establish how representative this is of the entire DH community. Our guess is that the more experienced contingent of the community is somewhat over-represented, as they are more likely to have come across our survey via the channels we used. Respondents also have a wide range of academic backgrounds (see Figure 3), with large groups of respondents having a humanities background, specifically in History, Language and Literary Studies, or Textual Scholarship. (The latter includes Book History, Paleography, Scholarly Editing and Textual Criticism.) There are also many respondents with backgrounds in Computer Science and Library and Information Science.

Figure 3: Academic backgrounds of respondents, who could check multiple backgrounds from the ones we listed as well as add additional backgrounds in a free text field. The table lists only the backgrounds checked by at least 5% of the respondents.

4.1.1 Overlapping categories and significance testing

Before moving on to discussing how respondents defined code literacy and answered further questions, we discuss ways in which we can meaningfully compare sub-groups of respondents along different demographic dimensions.
To test the observed differences for statistical significance, we need to consider different tests for the variables that were coded with mutually exclusive categories – such as gender and career phase – and those that were coded with overlapping categories, like role or disciplinary background, where respondents could select multiple answers. The former type of variables were tested using a one-way ANOVA with Tukey Honest Significant Difference (HSD) post-hoc tests (Sachs 2012, Ch. 7). For the latter type of variables, we consider two options. One is to make as many groups as there are combinations of answers. For example, all respondents who indicated having both a role as researcher and as information specialist are in a separate group from those who selected only researcher or only information specialist. In this way, no respondents belong to multiple groups, but we can still compare the groups for differences in how they define code literacy or answer other questions. There are two drawbacks to this approach. One is that it creates many groups, each with only a small number of respondents, resulting in many comparisons of pairs of groups, which are often too small to establish statistical significance. The other issue is that even though this step establishes distinct groups, they can be hard to interpret. How should we interpret the difference between the group of respondents who are researcher, information specialist and IT specialist against the group who are researcher and information specialist, the group who are researcher and IT specialist, and those who only selected IT specialist, information specialist or researcher? The other option we consider is to compare all respondents who selected a particular role or background against those who did not, using a $\chi^2$ test. This has the disadvantage that we do not directly compare groups, but the advantage of more clearly interpretable test hypotheses and outcomes. For example:
are respondents with a role as IT specialist more or less likely to include writing code as part of the definition of code literacy than those who do not have this role? In the analysis below, we use the latter option. That is, we use the $\chi^2$ test on the group with a particular attribute or response and the complement of that group. 4.2 Definitions and Interpretations After the demographic questions, we asked respondents to provide their own definition of code literacy. As this was a required question, all 399 respondents provided a definition. Below are a few examples of typical definitions given by the respondents: “The ability to understand and write code and to use it to achieve some research goal.” “The ability to read, write and use code.” “Knowledge and experience of solving problems through the use of programming skills” But there are also many definitions that are more elaborate and rich: “I’d say there’s informal and formal code literacy. Formal literacy is what you find with colleagues in the sciences and engineering. There’s a strong emphasis on style, standards, and efficiency. Tests and quality control are required. Then there are the programming historians, the self-taught, and highly pragmatic types. It’s just amazing that it works at all. The code is an odd pastiche of cut and pastes from Stack Overflow. It’s a game of persistence, self-doubt, and curiosity. I mostly work with informal code literacy to give people the practical skills of reading error messages and documentation, finding helpful solutions to problems, and making something that works but is by no means pretty or professional.” To be able to analyse and compare these definitions, we coded them using open, axial, and selective coding methods (Corbin and Strauss [1990]). The open coding step was done individually by three of the authors. 
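The group-versus-complement $\chi^2$ comparison described in §4.1.1 can be sketched in a few lines of Python. The function below computes the Pearson $\chi^2$ statistic for a 2×2 table (without Yates’ correction) using only the standard library; the counts are invented for illustration, and in practice one would typically reach for a library routine such as `scipy.stats.chi2_contingency`.

```python
import math

def chi2_2x2(table):
    """Pearson chi-squared statistic and p-value for a 2x2 contingency
    table (no Yates correction); degrees of freedom = 1."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, observed_row in enumerate(table):
        for j, observed in enumerate(observed_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    # Survival function of the chi-squared distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Invented example: do respondents with an IT-specialist role mention
# "writing code" in their definition more often than everyone else?
#                 mentions writing   does not mention it
# IT role:              55                  21
# no IT role:          180                 143
chi2, p = chi2_2x2([[55, 21], [180, 143]])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

A small p-value would indicate that the distribution of answers in the group differs from that in its complement more than chance alone would explain.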
In the axial coding step, each of the annotators had already created a hierarchy for their own codes, so we compiled a list of all the hierarchical codes and derived a single hierarchy of codes. The overall hierarchy consists of five basic aspects, each with several sub-aspects. The basic aspects are Communication, Code Type, Contextual Understanding, Level of Competence and Other, see Table 1.¹⁴ Apart from the five basic aspects, we added a code for responses that do not resemble a definition but, according to us, seem like a response to a different question. An example of this is “I have none experience whatsoever, but I would like to change that.” This is the case for 23 responses. One respondent only filled in a hyphen, possibly to be able to move on to the next question without providing any definition.

Later in the survey, we included a working definition of code literacy, and asked respondents if they wanted to answer the remaining questions using our definition or their own. We included this in case respondents were uncertain or not satisfied with their own definition. Our working definition was:

Code literacy has to do with different levels of ability to recognise, interpret, and use code; not necessarily being able to create it yourself. In this context, code can refer to encoding (e.g. XML or MPEG) as well as program code (e.g. Python or R) to operationalise a process in concrete steps and actions.

---

14 The full hierarchy of aspects, including the scope notes of each, is provided in Appendix A.
<table> <thead> <tr> <th>Code</th> <th>Description</th> <th>Scope note</th> </tr> </thead> <tbody> <tr> <td>COM</td> <td>Communication</td> <td>Knowing how to communicate about code with others, either as a coder yourself, or with someone who codes for you, including communication about purpose, workings, role and implications, as well as teaching code literacy.</td> </tr> <tr> <td>CT</td> <td>Code Type</td> <td>Whether the definition explicitly refers to a specific type of code, either 1) code as encoding of documents or 2) performative code for e.g. processing of data.</td> </tr> <tr> <td>CU</td> <td>Contextual Understanding</td> <td>Understanding that code sits in a context and how it relates to 1) research and operationalising research questions, 2) its possibilities, limits and biases, 3) the culture and attitudes toward code, 4) the ecosystem around code of ethics, privacy, security, maintenance, documentation, versioning, licensing, practices, platforms and software, and 5) other aspects of context.</td> </tr> <tr> <td>LOC</td> <td>Level of Competence</td> <td>Literacy is divided into 7 different levels of competence, from 1) understanding the basics of what code is and does, to being able to 2) read code, 3) write basic code, 4) modify existing code, 5) review code, 6) create and package code from scratch, and 7) understanding of the theory of computation and coding paradigms. Separate from those levels there is a sub-category for definitions that mention there are multiple levels of competence.</td> </tr> <tr> <td>OTHER</td> <td>Other aspects</td> <td>Any code literacy aspects not covered by the above four aspects.</td> </tr> <tr> <td>MIS</td> <td>Misinterpretations</td> <td>Responses that are not definitions but answers to a different question.</td> </tr> </tbody> </table> Table 1: The top-level aspects of our axial coding of the respondents’ definitions of code literacy.
Figure 4: Fraction of definitions that include a main definition aspect (left) and that include sub-aspects of Level of Competence (right), for all 399 respondents.

The distribution of aspects given in respondents’ definitions is shown on the left side of Figure 4. The vast majority of respondents mention at least one of the levels of competence (LoC) as part of the definition of code literacy, such as the following two definitions:

“the ability to recognize, understand and write codes of any kind, e.g. markup languages, programming languages, annotations or any other kind of symbolic, rule-based/formalized means of communication”

“the ability to read code to comprehend the purpose of functions, their syntax, and what output results; the ability to interact with code to improve its functionality, either in terms of efficiency/clarity or in terms of research purpose; the ability to write (individually or collaboratively) new code”

Only around 28% mention code type (CT) or contextual understanding (CU). Communication (COM) is rarely mentioned (less than 4%). We note that our working definition did not include any aspects of contextual understanding (although it hints at operationalising research processes), but aspects of it were mentioned by at least 20% of respondents in each of the 12 disciplines with more than 20 respondents. There are significant differences between disciplines in how many respondents mention any aspects of Contextual Understanding. Looking at the number of coded aspects in each definition, we found that most respondents mention one, two or three aspects (30%, 34% and 20% respectively), with the remaining 16% of definitions mentioning between four and seven aspects.
“being able to write and deploy code when you encounter problems that code could help you deal with; being able to read code of others; being able to read pseudo-code and series of formulae in publications; knowing how to use tools that support code development and implementation (version management tools, Pypi, etc.); being familiar with terminology used in discussing code and coding issues.”

We found that IT specialists mention more aspects in their definitions than non-IT specialists (2.8 versus 2.2 aspects, $\chi^2$ with $p = 0.002$), but career phase has no significant impact.

4.2.1 Level of Competence

Level of Competence (LoC) is by far the most mentioned aspect in the code literacy definitions. Across gender categories, career phases, roles and disciplinary backgrounds, we find no significant differences in how often LoC is mentioned. This suggests that at least some sub-aspects of LoC are core elements of code literacy on which there is a broad consensus. Moreover, it reflects that discussions on the role of programming are still central to defining code literacy (Dobberstein, 1993), even though respondents differ when it comes to prioritising competences.

Zooming in on the sub-aspects of Level of Competence, shown on the right side of Figure 4, reveals that the main aspects that respondents agree on are the ability to read (N = 198 or 50%) and write code (N = 222 or 56%). Digging further into the read and write aspects of definitions, we noticed that there is a substantial group (N = 77 or 19%) who mention reading code as a core competency of code literacy, but not writing code. This group of definitions is not associated with any specific disciplines. Many of these definitions emphasise that at a minimum, those involved in DH research should be able to read and get the gist of what a piece of code does, without necessarily being able to write such code.
Examples include:

“Being able to read and understand, not necessarily write, code”

“Understanding of how code works. Not necessarily how to code something specific yourself, but you... are able to understand how code is built and how to read it”.

On the other hand, 101 respondents (25%) mention writing code but not reading code, indicating that for another substantial group, the best or most logical approach to learning to code is by doing. Examples are:

“Ability to write/correct code that runs successfully, eventually”

“The ability to write practically usable code in some computer programming language”

“be able to write code to solve problems”

Overall, these results suggest that the DH community is still largely split on the question of whether code literacy implies possessing programming skills (Ide, 1987; Ramsay, 2013b; Rushkoff, 2010). While a majority supports the contention that literacy means writing code, a still substantial group of respondents would disagree, foregrounding other ways of engaging with programming languages. This prompted us to think about the different ways in which code literacy can be taught, and how different types of learning materials can be offered. For some groups of students and scholars, or within some curricula, it might make sense to focus first and foremost on how code relates to aspects of their discipline and how researchers translate between research questions and the logic of code, thereby emphasising how code fits in the research process, while for others, the best way might be to start with writing straightaway. For instance, people who already have some experience writing basic code might improve their skills (e.g. with additional elements or a different language or paradigm) more efficiently through writing.
Alternatively, the nature of research in some disciplines is more directly translatable to computational thinking and breaking down the process into procedures with explicit steps, making it more relevant to learn writing small bits of code from the start. For other disciplines, the connection between their research process and code might be more complex, in which case learning about the role and possibilities of code might require focusing on its relevance and potential, and engaging with examples of code that have been successfully applied in their discipline. The other aspects in the levels of competence are mentioned by only a few respondents, and represent more advanced perspectives on code literacy.

4.2.2 Contextual Understanding and Code Type

Next, we zoom in on the two other main aspects mentioned regularly in definitions, Code Type and Contextual Understanding. First, since only 28-29% of respondents mention these aspects, we want to know if these respondents differ from the others in terms of demographics, background, literacy or career phase. Among the groups of roles we created, IT specialists more often also mention these other aspects (code type and contextual understanding). We assume that IT specialists are most involved in working with code, and as noted above, they have more elaborate definitions than most others.
For example, this definition is one of the most “dense” in terms of elements (given by an academic researcher):

“I distinguish: - a basic understanding of the principle of coding and the affordances and limitations for one’s discipline, necessary to collaborate with a computer scientist or software developer - the capacity of executing predefined rules in a learning environment - mastering a computer language and knowing how to write code to execute particular actions - knowing several computer languages that are relevant to a specific domain and the specificities of each one of them - being all round with computer languages that cover the whole eco-system of digital humanities, so that you can create an infrastructure”.

As well as this one, provided by an IT specialist:

“The ability to conceive and implement a software solution to solve a specific problem in an adequate and efficient manner. The ability to understand and augment solutions implemented by others. The ability to use common infrastructure tools needed in software development (text editors or IDEs, version control).”

Only 28% of respondents made explicit reference to type(s) of coding in their definitions (i.e., encoding or processing). When they did, some mentioned both, some mentioned only one. Textual Scholars are more likely to mention encoding than non-Textual Scholars (16% versus 6%, $\chi^2$ with $p = 0.009$) and respondents who have a background in Software Development are more likely to mention processing than respondents who do not (58% versus 24%, $\chi^2$ with $p < 0.001$). Within Contextual Understanding, the possibilities, limitations and biases are more likely to be mentioned by respondents with a background in Archival & Museum Studies (27% versus 8%, $\chi^2$ with $p = 0.003$), Library and Information Science (17% versus 8%, $\chi^2$ with $p = 0.003$) or Linguistics (21% versus 8%, $\chi^2$ with $p = 0.002$).
Given the emphasis on contextual understanding in the (academic) literature (Clement, 2012; Dobberstein, 1993), the relatively low number of mentions of this aspect (mostly coming from IT specialists and not researchers) was somewhat of a surprise. It suggests that the community could benefit from expanding its notion of code literacy beyond discussing competencies or programming languages, and instead perceive literacy as the understanding of how code interacts with/relates to the humanistic framework in which it operates (Clement, 2012).

4.2.3 Code Literacy Level of Respondents

As people with several years of coding experience may have different definitions than people who have never directly interacted with code before, we asked respondents to score themselves on a five-point scale as to how code literate they consider themselves to be, given their own definition of code literacy (see left side of Figure 5).

Figure 5: Distribution of self-reported code literacy levels among all 399 respondents (left), and of main aspects of definitions per code literacy level (right).

The five-point scale has the following labelled levels: No literacy, Beginner, Intermediate, Advanced and Expert (Concordia University, 2011). The distribution of the responses is close to a normal distribution, with the most frequent level being *Intermediate*, and frequencies tapering off towards the extremes. The *Experts* are the smallest group with \( n = 46 \) respondents. There is an association between gender and literacy level. Women are more likely to consider themselves at *beginner* level than men (32% versus 18%, Tukey HSD with \( p = 0.008 \)), while men are more likely to consider themselves at *advanced* (11% versus 29%, \( p = 0.001 \)) and *expert* level (4% versus 17%, \( p = 0.001 \)). There is also a clear association between role and literacy level.
*Information specialists* are less likely than non-information specialists to consider themselves *advanced* (10% versus 23%, \( p = 0.02 \)), while, as is to be expected, IT specialists are less likely to have *no literacy* (0% versus 15%, \( \chi^2 \) with \( p = 0.001 \)) or to be at *beginner* (11% versus 28%, \( p = 0.003 \)) or *intermediate* (21% versus 33%, \( p = 0.05 \)) level than non-IT specialists, but more likely to be at *advanced* (37% versus 17%, \( p < 0.001 \)) or *expert* level (32% versus 7%, \( p < 0.001 \)). There are also significant relationships between disciplinary background and code literacy level, which we argue has implications for how code literacy is best taught within different disciplines. Respondents with a background in History are more often than non-Historians at the level of *no literacy* (20% versus 8%, \( \chi^2 \) with \( p = 0.001 \)) or *beginner* (32% versus 21%, \( p = 0.02 \)), and those with a background in Language and Literature are less likely to be at *expert* level than those without (7% versus 14%, \( p = 0.03 \)), while those with a background in Linguistics are less likely to be at *beginner* level than those without (5% versus 27%, \( p = 0.007 \)). These differences are visualised in Figure 6.

Figure 6: Code literacy level of respondents with a background in History, Language and Literature, and Linguistics.

One possible explanation for the differences between these humanities disciplines is the different research methods and data they use. Linguistics has a long history of computational approaches to analysing speech and textual utterances using quantitative methods (Hajic, 2004), so perhaps the step of translating methods and techniques to code is relatively small and requires no significant change in mode of thinking.
Whereas for historians, the established methods of archival research and making associative connections through close reading of heterogeneous documents are perhaps less easily translated into quantifiable and computational steps. It may therefore be more beneficial for historians to see examples of how computational processes have been translated into a recognisable part of historical research. This can help both the understanding of the potential and limitations of code for their discipline, as well as introduce “thinking in code” or “thinking through code” as a relevant mode of thinking. Interestingly, although historians and language and literature scholars score themselves lower on code literacy than many others, they form the two largest disciplinary groups in this survey, each represented by over 140 respondents. This could potentially be related to representation and self-selection bias (see Section 4.1). Again, as expected, respondents with a background in Software Development or Computer Science are rarely at the level of no literacy, beginner or intermediate, and are more often at the level of advanced or expert. The level of code literacy of respondents is clearly associated with the definitions (right side of Figure 5). The majority of respondents at all levels agree that Level of Competence is part of the definition, but Code Type and Contextual Understanding are mentioned more frequently by respondents with higher levels of code literacy. Respondents at Advanced level are significantly more likely to mention Code Type (41%), specifically Processing, than those at No Literacy (8%, Tukey HSD with $p = 0.001$), Beginner (19%, $p = 0.003$) and Intermediate (34%, $p = 0.04$) levels. 
The same goes for Contextual Understanding, which respondents at No Literacy level are significantly less likely to mention (12%) than respondents at Intermediate (23%, Tukey HSD with $p = 0.005$), Advanced (46%, $p = 0.02$) and Expert (33%, $p = 0.003$) levels, and those at Beginner level are significantly less likely to mention it (28%, $p = 0.04$) than those at Expert level. From this we speculate that more direct experience with code provides a richer vocabulary to talk about code and a wider perspective on how code relates to research questions and processes, and that writing and interacting with code brings people into contact with the larger ecosystem of coding languages, interpreters, data formats, versioning and management of code, aspects of ethics and privacy, and other elements. This does not mean that Contextual Understanding is a more advanced concept that should be taught at a later stage in the curriculum. We argue that more experienced coders are more aware of the importance of the context in which code is created and used. This also ties in with the earlier observation that foregrounding Contextual Understanding would help establish a more nuanced and broader perception of code literacy in the DH community.

To conclude our analysis of the definitions of code literacy, there are three important dimensions to consider when talking about code literacy: Level of Competence is the main dimension that respondents across all career phases, roles, disciplinary backgrounds and levels of code literacy mention, while Code Type and Contextual Understanding are more recognisable to those with some experience in using or creating code. The different perspectives on what code literacy is also suggest that there is no one-size-fits-all approach to teaching it, but that it makes sense to differentiate between disciplines and roles. We will address the question of how to teach and incorporate code literacy in curricula in a follow-up article.
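For readers who want to reproduce this style of analysis, the test we used for mutually exclusive groups (a one-way ANOVA followed by Tukey HSD post-hoc comparisons, as described in Section 4.1) can be sketched as follows. The scores and group sizes are invented for illustration, not the survey data, and the sketch assumes SciPy 1.8 or later for `scipy.stats.tukey_hsd`:

```python
# Minimal sketch of the one-way ANOVA + Tukey HSD procedure (Section 4.1).
# The scores below are invented for illustration; they are not the survey data.
from scipy import stats

# Hypothetical self-rated literacy scores (1-5) for three career phases
early = [1, 2, 2, 3, 3, 3]
mid = [2, 3, 3, 3, 4, 4]
late = [3, 4, 4, 4, 5, 5]

# Omnibus test: do the three group means differ at all?
f_stat, p_anova = stats.f_oneway(early, mid, late)

# Post-hoc test: which specific pairs of groups differ?
tukey = stats.tukey_hsd(early, mid, late)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(tukey)  # pairwise mean differences with adjusted p-values
```

The Tukey HSD step controls the family-wise error rate across all pairwise comparisons, which is why it is preferred here over running separate t-tests for each pair of groups.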
4.3 Importance of Code Literacy

We turn now to the second research question: how important is code literacy as part of DH scholarship? After giving their own definition of code literacy, we asked respondents to consider how important they think code literacy is for digital humanities. The distribution of responses is shown on the left in Figure 7. Only 5 respondents indicated that they did not know (1%), and only 21 (5%) answered not important at all, which means 373 respondents (93%) consider code literacy to be at least somewhat important. There are differences across background disciplines. The disciplines with the highest percentage of respondents who consider code literacy not important at all are Media Studies (12%), History (10%) and Cultural Studies (9%). So across all disciplines in our survey, the vast majority agree code literacy has a place in DH scholarship. Historians are significantly more likely to consider code literacy not important at all compared with non-historians (10% versus 3%, $\chi^2$ with $p = 0.004$) and significantly less likely to consider it very important (12% versus 25%, $p = 0.002$). Respondents with a background in Archival & Museum Studies are significantly less likely to consider code literacy crucial than those without this background (7% versus 36%, $p = 0.006$). On the other hand, linguists are significantly more likely to consider it crucial than non-linguists (55% versus 32%, $p = 0.007$). Again we see the strong contrast between historians and linguists. Their difference in perceived importance might be related to their difference in levels of code literacy.

---

15 We leave out percentages and p-values for readability.
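The group-versus-complement comparisons reported throughout this section amount to a $\chi^2$ test on a 2×2 contingency table. A minimal sketch, using invented counts (not the actual survey tallies) for respondents with and without a History background:

```python
# Minimal sketch of the group-vs-complement chi-squared test (Section 4.1).
# The counts are invented for illustration; they are not the survey tallies.
from scipy.stats import chi2_contingency

# Rows: History background vs. everyone else (the complement).
# Columns: answered "not important at all" vs. any other answer.
table = [[14, 126],   # hypothetical: 14 of 140 historians (10%)
         [7, 252]]    # hypothetical: 7 of 259 others (~3%)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```

Because each test splits the sample into a group and its complement, every respondent appears in exactly one row, which keeps the hypotheses directly interpretable even when roles and backgrounds overlap.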
Apart from these three humanities disciplines (represented in Figure 8), the only other disciplines with statistically significant differences are Computer Science and Software Development, which, not surprisingly, score lower on somewhat important (7%, $\chi^2$ with $p = 0.02$, and 0% with $p = 0.01$, versus 23% respectively), but higher on very important and crucial (respectively 60% and 77% versus 31%, $p < 0.001$). Keeping in mind the possible self-selection bias mentioned in Section 4.1, the results suggest there is a broad consensus across a wide range of disciplines, within the humanities and beyond, that code literacy is important to many DH scholars.

5 Discussion

This paper analysed the results of a questionnaire on the definition and importance of code literacy in digital humanities. Our coding and analysis of the 399 definitions of code literacy reveals a complex, multi-layered and multi-faceted perspective on code literacy; from the basic skills of reading, interpreting, writing and using code, to publishing and reviewing code, to the different contexts in which code is created and used – the research context, the ecosystem of hardware, software, and communities and conventions – as well as the societal context relating to ethics, privacy and bias. We found that these different aspects are related to the experience levels of respondents, with more code literate respondents providing more elaborate and nuanced definitions. This suggests that code literacy is not a single level that one reaches at a certain point, but a set of skills that people continuously improve and extend within a particular context. The order in which these skills are best learnt and developed is not necessarily related to the level of literacy of the respondents that mention them.
The context of research, in which one translates between research questions and methods on the one hand, and how that can be expressed, modelled or performed via code on the other, is mentioned most by respondents who identify as more code literate. However, we argue that this contextual understanding should be learnt early on, as it is the most directly relevant aspect for integrating code into DH scholarship. Many coding practitioners in the (digital) humanities, humanities researchers, and sociologists of code literacy seem to have gravitated to a similar attitude, although they differ in opinion on how to define this “contextual understanding”.

A large proportion of respondents think that code literacy is important for DH scholarship. However, the particular distributions we see regarding who thinks code literacy is important should also inspire questions about the reasons why we consider code literacy to be important. It seems safe to assume, though, that the perception that code literacy is important in DH is on the rise. And yet, many respondents are dissatisfied with their own level of code literacy. This prompts the question of whether there is currently a gap in academic curricula regarding digital humanities. And if so, would enhancing code literacy fill this gap? And what should such teaching look like according to our respondents? Questions such as these will be addressed in follow-up publications in which we will analyse and discuss the remaining questions and responses of the questionnaire.

References

Concordia University. Computer skills: Levels of proficiency, 2011. URL [https://www.concordia.ca/content/dam/concordia/services/hr/docs/employment/guides/proficiency-computer-skills.pdf](https://www.concordia.ca/content/dam/concordia/services/hr/docs/employment/guides/proficiency-computer-skills.pdf).
Appendix A - Code Literacy Definition Aspects

<table> <thead> <tr> <th>Code</th> <th>Description</th> <th>Scope note</th> </tr> </thead> <tbody> <tr> <td>LOC</td> <td>Level of Competence</td> <td>Different levels of competence of interacting with code, from recognising, reading and basic writing, to increasingly complex aspects of these. Competencies thereby have different levels.</td> </tr> <tr> <td>LOC-1</td> <td>Recognize</td> <td>The ability to recognize code as code, having an understanding of the general purpose of code, e.g. that code is used to tell a processor what actions to perform (processing) or to structure the content of a document (encoding).</td> </tr> <tr> <td>LOC-2</td> <td>Read and apply</td> <td>The ability to understand the syntax of a coding language and to read a specific bit of code and figure out what it does or what it conveys. This can include understanding data structures (like arrays, hashes, ...), databases, APIs, and control-flow aspects like loops and conditionals. This also includes knowing how to apply it, e.g. by changing a few parameters or variables.</td> </tr> <tr> <td>LOC-3</td> <td>Write</td> <td>The ability to write syntactically correct code in a specific language (processing or encoding). Correct means that it can be executed without error, but says nothing about its quality or organisation. Use this when a definition only speaks of ‘writing code’ without specification.
We assume this level is also applied in definitions that say something like ‘knowing at least one programming language’.</td> </tr> <tr> <td>LOC-4</td> <td>Repurpose (copy-paste/libraries), edit/modify</td> <td>The ability to identify a relevant piece of existing code or code libraries and incorporate it in one’s own code, or adjust it to one’s own purpose and context (beyond changing a few parameters or variables and running the repurposed code as a tool).</td> </tr> <tr> <td>LOC-5</td> <td>Review/evaluate</td> <td>The ability to review code to decide if it corresponds to its creator’s intended use and purpose, and the ability to evaluate the quality of code.</td> </tr> <tr> <td>LOC-6</td> <td>Create, test, improve and deploy</td> <td>The ability to create code from scratch to solve a concrete task (either to process data or to encode documents).</td> </tr> <tr> <td>LOC-7</td> <td>Paradigms and theoretical aspects of computation</td> <td>The understanding of various programming paradigms and how they differ in terms of modelling and extending core programming concepts. This refers to aspects like the differences between object-oriented and functional programming, or declarative versus procedural. Theory of computation includes references to computability, P vs. NP complete, ...</td> </tr> <tr> <td>LOC-M</td> <td>Different levels of literacy</td> <td>Use this for answers that explicitly refer to different levels of code literacy. For example: “basic literacy is being able to read and understand code, intermediate literacy is being able to write code, advanced literacy is being able to write high quality code.”</td> </tr> </tbody> </table> Table 2: Code literacy definition aspects related to Level of Competence.
<table> <thead> <tr> <th>Code</th> <th>Description</th> <th>Scope note</th> </tr> </thead> <tbody> <tr> <td>CU</td> <td>Contextual Understanding</td> <td>Understanding that code is not created or used in a vacuum, but is used within a wider context of the research it is part of, cultural attitudes and conventions, and other technological aspects like environments and platforms.</td> </tr> <tr> <td>CU-1</td> <td>Transforming research problems to computation</td> <td>Understanding how a research question or problem can be divided into increasingly smaller questions or problems, which can be more directly addressed through code or software. People mentioning this aspect might focus more on code or on the research question.</td> </tr> <tr> <td>CU-2</td> <td>Potential, limits, biases</td> <td>Understanding how specific technologies and tools are relevant to a research problem, what they can and cannot do, when it’s appropriate to use them, and how they can create, propagate or exacerbate biases in the data. This is not about social/cultural aspects of code, but about the context of the coder and their intention.</td> </tr> <tr> <td>CU-3</td> <td>Attitude</td> <td>The attitude towards learning to code and using and creating code, as well as to its value for and role in research. This includes things like being open-minded and confident in accepting that you maybe don’t know what you’re doing or that you make a lot of mistakes. This also includes attitudes around autonomy and ownership as well as attitudes towards finding help (e.g. don’t be afraid to ask for help or copy from others).</td> </tr> <tr> <td>CU-4</td> <td>Code ecosystem</td> <td>The understanding and ability to handle aspects of ethics and privacy, security, to maintain, document and version code, understanding licensing and open-source aspects. Knowing about the software ecosystem around code (code editors, repositories) and knowing how the web works (servers, domains, sites, etc).
Knowing how to find help (in documentation but also on forums and via colleagues).</td> </tr> <tr> <td>CU-5</td> <td>Other references to context</td> <td>Any aspect of code context that is not covered by the first four aspects.</td> </tr> <tr> <td>CT</td> <td>Code Type</td> <td>Whether the definition explicitly refers to a specific type of code, either code as encoding of documents or code as processing of data.</td> </tr> <tr> <td>CT-1</td> <td>Encoding</td> <td>A form of coding that segments and structures the content of a document (be it text, image, audiovisual or data) and identifies elements of interest.</td> </tr> <tr> <td>CT-2</td> <td>Processing</td> <td>A form of coding that is performative, in the form of processing instructions that can be run to perform some task.</td> </tr> <tr> <td>COM</td> <td>Communication</td> <td>Being able to communicate about code and collaborate with others through code (including, among others, code sharing, pair programming, co-designing specifications, co-designing a methodology); this could also include teaching and explaining of code to others.</td> </tr> </tbody> </table> Table 3: Code literacy definition aspects related to Contextual Understanding, Code Type, Communication and aspects not related to code literacy.
olmocr_science_pdfs
2024-11-28
2024-11-28
e9580f16dc2030bbab5a058f85394a93cdeb952d
Implementing Legba: Fine-Grained Memory Protection

Authors: Sheffield, Sowell, and Wilson

Abstract—Fine-grained hardware protection could provide a powerful and effective means for isolating untrusted code. However, previous techniques for providing fine-grained protection in hardware have led to poor performance. Legba has been proposed as a new caching architecture, designed to reduce the granularity of protection without slowing down the processor. Unfortunately, the designers of Legba have not attempted an implementation; instead, all of their analysis is based purely on simulations. We present an implementation of the Legba design on a MIPS core processor, along with an analysis of our observations and results.

Keywords: Legba, protection, PLB

Follow this and additional works at: http://openscholarship.wustl.edu/cse_research

Part of the Computer Engineering Commons, and the Computer Sciences Commons

I. INTRODUCTION

Protection is a long-standing problem in operating system safety. With the growing popularity of mobile code, the proliferation of third-party operating system extensions, and the clear dangers of running these extensions in a privileged context, there is a definite need for better protection mechanisms. Recent work on Microsoft's Singularity [8] project relies on software-isolated processes to provide safety properties. Type-safe languages do provide strong software protection mechanisms for safety. However, given the frequency of defects in software systems resulting in vulnerabilities, we suggest that some additional layers of protection may improve the situation. Brian Bershad observed that while software protection mechanisms provide the most flexible and applicable protections, "software mechanisms usually rely on hardware as a foundation to ensure their own integrity, while changes in hardware protection are usually controlled and limited through software mechanisms." [10] We agree wholeheartedly with this observation, and believe that hardware mechanisms should be designed and evaluated for providing fine-grained encapsulation of lightweight objects.

Legba [2] is a hardware design for fast, fine-grained memory protection. Unfortunately, the original designers have not yet attempted an implementation. While the design appears to provide the protection mechanisms required, it needs validation beyond simulation. We have implemented Legba in VHDL to better understand the design space and to validate the Legba architecture.

II. RELATED WORK

The problem of providing fine-grained memory protection in an efficient manner is not new.
Most current processors provide a partial solution by attaching permissions to pages in the Translation Look-aside Buffer (TLB). This has two disadvantages. First, the lines in a TLB refer to pages, so permissions can only be as fine-grained as the page size, which for most processors is 4KB. Secondly, the TLB is shared among all processes on the system, so we must either add context tags to the TLB lines or perform a complete TLB flush on a context switch. This solution is also usually limited in the number of entries: because the TLB is frequently fully associative and must be on the processor core (and critical path), its size is minimized to reduce the access time and avoid lengthening the processor critical path. Given the inadequacy of current processor architectures for providing fine-grained protection, research efforts have been made into new architectures for protection.

A. Software Solutions

A number of software solutions have been proposed to provide the flexible protections required. While all of these are useful in their own space, they still rely on hardware for a foundation. One method of safely running untrusted code is to use an interpreter. If implemented correctly, this does provide complete safety. However, interpreted code is slow and inefficient; some studies have found that interpreted code is up to 100x slower than native machine code [13]. Finally, while a correct interpreter provides complete safety, proving the correctness of a large piece of software such as an interpreter is usually challenging.

Proof-Carrying Code [14] is a much more promising line of research. A proof of safety is embedded in a program. Before loading the program, the system checks the proof against the code and determines whether it is safe to run. This provides the best of both worlds: mathematically demonstrated safety with the speed of native code.
Unfortunately, creating these proofs has proved to be a very difficult problem, and an automated safety prover is still out of reach.

Type-safe languages [15,9,8] provide another approach. The language does not provide constructs for violating the type safety of object encapsulations, so each component is software-isolated from every other object. The major problem here is in ensuring that each program comes from a type-safe compiler, and that all programs adhere to the software protection scheme. Additionally, there is the non-trivial problem of validating the correctness of a large, complex compiler.

Software Fault Isolation isolates faults in one module from impacting another module. One very effective method is NOOKS, outlined in [16]. NOOKS uses a combination of automatically instrumented code and memory management techniques to prevent a defective kernel module from corrupting the rest of the kernel. However, the memory management techniques rely on modifying the TLB to restrict access to memory. While this prevents a defective module from inadvertently trampling kernel memory by mistake, no attempt is made to prevent the module from loading a new TLB and accessing memory at will. In the words of the NOOKS designers, "We trust kernel extensions not to be malicious, but we do not trust them not to be buggy." This is a valuable design space, but we are interested in assurance against malicious code, not merely defective code.

B. Palladium

Palladium [17] attempts to solve the same problem using existing segmentation and paging hardware in the Intel x86 class of processors. Since these processors support a large number of segments (8,192), each with its own privilege level and near-arbitrary size, this seems like a promising option at first. Unfortunately, on closer examination there are some significant disadvantages. The segment protection levels are limited to 2 bits, or four ordered levels. This is adequate for a very shallow ordered hierarchy.
However, for an arbitrary protection matrix, we need to be able to support non-hierarchical orderings of arbitrary depth; in short, a complete subject/object access control matrix [18]. Palladium also has a significant protection domain boundary crossing penalty of 142 cycles. While this is not an insurmountable difficulty, it is possible to reduce the protection domain boundary crossing penalty.

C. Itanium

Itanium provides a Protection Key Register (PKR) [19]. The PKR is a 16-entry fully-associative cache. Since the cache lines do not have context tags, the cache must be flushed on every context switch. Since the PKR is separated from the TLB, TLB cache lines can be shared between different protection domains, with different access permissions. However, since the PKR still ties protection to individual pages, we have no sub-page protection. Furthermore, the PKR is on the processor critical path. This is probably the driving force behind the choice of PKR size; a large, fully associative cache would add significantly to the processor critical path length.

D. Protection Look-aside Buffers

A major innovation in protection management comes with protection look-aside buffers (PLBs) [4]. With PLBs, we remove all protection information from the TLB and place it in a new construct, the PLB. If we use a virtually addressed L1 cache, then the TLB is no longer necessary for L1 cache operations and can be moved off core. This allows us to expand the TLB and increase its associativity, since it is only used during L2 or lower accesses; the latency can be masked in the lower-level memory access. Another major benefit is that the PLB is much smaller than the TLB. Unfortunately, the PLB still suffers from all of the classical drawbacks of a TLB: it is still fully associative, and still sits on the processor critical path.
Finally, the classical PLB suffers one more major disadvantage: the need to perform an associative lookup of an address without knowing the base address of the object whose attributes are cached. Thus, when we wish to check the permissions associated with an address, we need to use the address to look up the associated memory object in a fully-associative cache using object base and limit. Having this lookup in the L1 cache is expensive.

E. Mondrian Memory Protection

Mondrian Memory Protection [7] incorporates the concept of the PLB and adds an additional optimization, the sidecar. Sidecars are cached protection bits applied to any register capable of containing an address (including the program counter). When a register is first used to address memory, the memory permissions are not known. However, once the permissions are known, they are cached in a sidecar to the register. Future accesses check the sidecar first; if the address object is still the same, the sidecar permissions are used instead of consulting the PLB for permissions. This removes the PLB from the processor critical path. In the case of Mondrian memory protection, the sidecar contains:

- the base of the last memory access
- the limit of the last memory access
- the rights of the last memory access

As no context tags are included, the sidecars must be flushed on each context switch. PLB entries do include context, allowing us to share cache lines between different protection domains using different permissions. Mondrian memory protection still uses a classical PLB, and suffers from the major disadvantage of the PLB: the associative lookup.

III. Overview of Legba

Legba provides fine-grained memory protection by combining features of many disparate memory protection schemes. Mondrian memory protection comes closest to meeting our needs, except for the associative lookup. That is, we have an address, A, which falls between some object O's base B and limit B+L.
Then we should apply the permissions of object O to accesses to address A. The problem becomes: how do we associate address A with object O? In Mondrian, the address is checked in L1 cache. In the case of Legba, we no longer store address objects in the PLB (in L1 cache) by base and limit, but by Object Identifiers (OIDs). This also allows us to add a level of indirection from the data cache lookup to the actual protection information, permitting us to share cache lines between multiple protection domains. To add this indirection, Legba supplies a data cache with the usual data, tags, and stats fields, as well as an OID used to index a second cache, the Protection Key Cache (PKC), which holds the protection information. To avoid extending the processor critical path, the PKC is placed in the pipeline stage following the data cache. To facilitate the sharing of data cache lines between different protection domains, the PKC is indexed by both the OID from the cache and the current Protection Domain Identifier (PDID). See Figure 1 [2].

Legba uses four permission bits: e(X)ecute, (S)witch, (R)ead, and (W)rite. X, R, and W are self-explanatory; S is discussed in more detail under the new instructions for Legba. Legba also includes sidecars à la Mondrian memory protection. However, in the case of Legba, the sidecars do not contain the full base+limit, but just an OID. During data lookup in the cache, an authoritative OID is returned. Sidecar content is validated by comparing OIDs. Finally, Legba also adds two new instructions, Branch-Linked-Locked (BLL) and Switch-Load (SL), for managing the current PDID.

Figure 1: Cache and PKC relationship (Picture copyright A. Wiggins [2])

Figure 2: PKC location within Pipeline (Picture copyright A. Wiggins [2])

IV. Implementation

Our implementation is based on the MiniMIPS [11] implementation of the MIPS core.
The MiniMIPS is a standard 5-stage (IF|ID|EX|MA|WB) pipelined architecture with support for exceptions, pipeline stalls on branches, and no cache or virtual memory (and thus no TLB). We found that Legba adapted very well to implementation on the MiniMIPS. In our implementation, an attempt to access memory in violation of policy generates an exception. In the MiniMIPS, an exception is handled by an immediate jump to a global exception handling address. Our implementation does not actually include the exception handler, which we consider to be outside our scope. The pipeline stall on branch required one minor change to our BLL/SL instruction implementation, detailed below. The lack of a TLB means that we cannot mask the latency of the associative lookup of the OID by performing it in parallel with address translation. Since most processor architectures today do have a TLB, we consider this to be an unusual case that does not invalidate the assumption that the latency can be hidden with a parallel lookup. In general, we believe that the Legba architecture can be added to almost any processor architecture without great difficulty, although we offer only the anecdotal evidence of our own implementation as proof.

A. Pipeline Changes

Our implementation made very few changes to the pipeline structure. We added several new components to the pipeline. The major addition is the Protection Key Cache, or PKC. See Figure 2 [2]. Also, since the MiniMIPS had no cache, we added the instruction cache and data cache to the pipeline, for a modified Harvard architecture. In most architectures, this is unnecessary, since separate instruction and data caches are commonly included. We also found it necessary to add some additional registers to the pipeline: a PDID (Protection Domain ID) register, to represent the current execution context, and a Protection Key Table Register, which will be discussed under the memory hierarchy.
Finally, it is worth detailing exceptions in the MiniMIPS. When an exception is raised, all stages prior to the exception are filled with NOPs, and a signal is sent to the PC register. The next address to be loaded in the instruction fetch stage will be the address of the exception handler. For our implementation, we used address 0xFFFF0000.

B. Instructions Added

Legba requires the addition of two new instructions: BRANCH-LINKED-LOCKED (BLL) and SWITCH-LOAD (SL). To simplify our implementation, we changed the BLL to a simple JUMP instead of a BRANCH-LINKED. That is, we do not save the old address on a stack, etc. Instead, we treat the BLL much more like a syscall or trap instruction. The exact semantics are as follows. On reading a Branch-Linked-Locked, the processor sets up for a normal JMP, passing the address to the MiniMIPS jump logic. The jump logic handles relative branch computation and stalling the pipeline. We also add an additional pipelined signal which passes through to the Instruction Fetch stage with the jump information. This signal notifies the IF stage that the next instruction must be SL. On load, additional logic we have added to the IF stage checks the signal and the next instruction. If a BLL is not followed by SL, an exception is raised. Before executing the SL instruction, we verify that we have (S)witch permission to the object in which it resides. When executed, SL takes the OID of the object containing the SL and loads it into the pipelined PDID register. The actual validation of the X and S permissions is addressed in our discussion of the PKC component. It is sufficient to note for now that we must have X permission to the objects in which both the BLL and SL reside, and S permission for the object in which the SL resides. One point not discussed in the original design of Legba is whether a naked SL should generate a permissions exception. We elected to allow this case.
Thus, SL may be encountered at any point in the program. However, given that the only way this can happen is if we "fall through" from one object space to another, or by jumping to an SL by a normal JMP (where a BLL could be used), this seems to be mostly irrelevant. In summary: our implementation adds two instructions to the MiniMIPS instruction table, the BLL and SL. We also add logic to the IF stage to verify that SL follows BLL (component is_SL). Finally, we add logic to the ID stage to load the new PDID into the pipeline (component pdid_update). Since the PDID is only used in the ID and WB stages, we latch it in stage ID as a register, and pipeline it through to the WB stage as a normal pipelined signal.

C. Sidecars

Legba implements Mondrian-style sidecars, but instead of a base+limit identifier, we use an OID. Because of concerns in the caching components, we made our OIDs 16 bits. Our sidecar adds 19 bits to the storage for each register (see Figure 3). Note that we only have 2 bits of permissions, despite having 4 distinct protection bits. This is because we have two distinct types of registers: the PC is only ever used to execute or switch, never to read or write, while the other general registers are only ever used to read and write, never to execute or switch. Thus, we can save 2 bits on each sidecar by only caching the relevant bits. Our implementation has the following semantics:

1. On context switch (SL instruction), all sidecar "valid" bits are cleared.
2. When a register is used in a lookup, real permissions are validated for that lookup in the cache and PKC.
3. As a side benefit of checking these permissions, we also route them back to the register used in the lookup, as a sidecar update. This update may also change the OID of the register.

The implementation requires only a few changes to the existing MiniMIPS: add storage bits to the registers, and "update" and "flush" lines to the register files.
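Two of these rules — the IF-stage requirement that a BLL is always followed by SL, and the clearing of all sidecar "valid" bits on a context switch — can be sketched in software. This is a hedged Python model, not our VHDL: the class and field names are ours, and the X/S permission checks performed by the PKC are deliberately omitted here.

```python
# Model of two rules from the text: (1) the instruction fetched after a
# BLL must be SL, or an exception is raised; (2) executing SL loads the
# OID of its containing object into the PDID and flushes all sidecars.

class ProtectionFault(Exception):
    pass

class Pipeline:
    def __init__(self):
        self.pdid = 0                 # current Protection Domain ID
        self.expect_sl = False        # pipelined "next must be SL" signal
        self.sidecars = [{"oid": 0, "perms": 0, "valid": False}
                         for _ in range(32)]   # one per general register

    def fetch(self, instr, containing_oid):
        """Model one instruction fetch. containing_oid is the OID of the
        object the instruction resides in (found via the cache/OLB)."""
        if self.expect_sl and instr != "SL":
            raise ProtectionFault("BLL not followed by SL")
        self.expect_sl = (instr == "BLL")
        if instr == "SL":
            # Switch-Load: the containing object's OID becomes the new PDID
            self.pdid = containing_oid
            for sc in self.sidecars:          # flush sidecars on switch
                sc["valid"] = False
```

Note that a "naked" SL (one not preceded by BLL) passes this check, matching our decision to allow that case.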
We also add pipelined signals to track which register is being used in a memory access, so that sidecar updates are routed to the correct register. This adds no logic to the top level, just an additional signal for the pipeline. The sidecar updates take the exact same form as the sidecar data itself, where the "valid" bit is used to assert an update.

<table> <thead> <tr> <th>OID (16)</th> <th>Perm. (2)</th> <th>Valid (1)</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> </tr> </tbody> </table>

**Figure 3**: Sidecar format

D. Memory Hierarchy

Our cache architecture is a 4-way modified Harvard architecture (see Figure 4). While the addition of two more L1 cache components runs the risk of more frequent stalls, the original designers believe that PKC misses should be rare. We discuss this assertion further in our evaluation section. An important point not addressed in either the original Legba design or in our own implementation is cache coherence. Given that we now have 4 separate cache components in L1, we believe it would be most efficient to implement request bus snooping to maintain coherence. Additionally, bus snooping allows us to monitor changes to permissions in the lower-level Protection Key Table (PKT), updating the relevant PKC and/or cache. The original Legba design also mentions the possibility of object re-numbering; this could be an acceptable way to implement this. In our actual implementation, we completely ignore the issue of cache coherence and recommend it to future investigators.

1) L1 cache

Our L1 cache scheme has 4 different components. However, the instruction cache and the data cache can be implemented by the same component, as can the two Protection Key Caches (PKCs). Our actual cache was never intended to be synthesizable, and was implemented in behavioral VHDL with some delay statements. The problem of implementing cache has long been solved, and we use the behavioral VHDL to model a real cache. Our cache model is an n-way set-associative cache.
Since the MiniMIPS does not have virtual memory, the cache is physically addressed and physically tagged. We chose to make it 4-way set associative, but do not attach significance to this number. We also add an OID to each line, but all logic for selecting this OID is in the L2 cache, not L1. Our L1 performs simple caching only. We do encapsulate our L1 cache inside logic to handle an unexpected data dependency, explained in our evaluation section. This logic is not present in the original Legba design. Our PKCs are significantly more complicated. We encapsulate our actual PKC inside a set of logic to handle sidecar optimizations, pipeline stall management, and protection checking. A PKC check requires the following inputs:

- the OID (from the cache),
- the sidecar from the register, with OID, permission bits, and validity,
- the current PDID (from the pipeline),
- the access type requested (from the instruction, e.g., Read or Write).

The PKC outputs:

- sidecar updates (routed to the relevant register),
- an exception line (on access violation),
- a stall (on cache miss and lower-level lookup).

Our PKC also has one other output used in managing the data dependency mentioned above, detailed in our evaluation section. On a PKC lookup, we first check the cache OID against the sidecar OID. If they match, and the sidecar is valid, we can skip the PKC lookup and use the sidecar permissions. Otherwise, we must look in the PKC. The PKC is a set-associative cache, indexed by OID. We use the PDID as the tag in our lookups. Thus, indexing by OID returns a set of cache lines; using the PDID, we can do a parallel check of the set of cache lines, searching for the correct one. Since our lines are very small (PDID + permissions = 18 bits; adding replacement algorithm stats, a line is probably 20 bits), our parallel lookup is very fast, and we can afford to have a fairly large PKC.
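The lookup path just described — the sidecar fast path, then a set-associative PKC probe indexed by OID and tagged by PDID — can be sketched in Python. The names here are illustrative, not taken from our VHDL, and the parallel hardware comparison is modeled as a sequential scan:

```python
from dataclasses import dataclass

# Permission bits from the text: e(X)ecute, (S)witch, (R)ead, (W)rite
X, S, R, W = 8, 4, 2, 1

@dataclass
class Sidecar:
    oid: int = 0
    perms: int = 0
    valid: bool = False

class PKC:
    """Set-associative cache: indexed by OID, tagged by PDID."""
    def __init__(self, n_sets=64, ways=4):
        self.n_sets, self.ways = n_sets, ways
        self.sets = [[] for _ in range(n_sets)]  # lines: (oid, pdid, perms)

    def fill(self, oid, pdid, perms):
        s = self.sets[oid % self.n_sets]
        if len(s) >= self.ways:
            s.pop(0)                              # simple FIFO eviction
        s.append((oid, pdid, perms))

    def lookup(self, oid, pdid):
        # hardware checks the whole set in parallel; we scan it
        for line_oid, line_pdid, perms in self.sets[oid % self.n_sets]:
            if line_oid == oid and line_pdid == pdid:
                return perms
        return None                               # miss: walk the PKT

def pkc_check(cache_oid, pdid, requested, sidecar, pkc):
    """Return (granted, stall). A valid sidecar with a matching OID skips
    the PKC entirely; otherwise the PKC is probed and the sidecar updated."""
    if sidecar.valid and sidecar.oid == cache_oid:
        perms = sidecar.perms
    else:
        perms = pkc.lookup(cache_oid, pdid)
        if perms is None:
            return False, True                    # stall for the PKT walk
        sidecar.oid, sidecar.perms, sidecar.valid = cache_oid, perms, True
    return bool(perms & requested), False
```

For example, after `pkc.fill(7, 1, R | W)`, a read check `pkc_check(7, 1, R, Sidecar(), pkc)` grants access and populates the sidecar, so a second check through the same sidecar never touches the PKC.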
In either case (sidecar or PKC lookup), the protection information is routed with the requested access information to a small protection check component (permchk). This component issues the actual exception to the external pipeline. During a PKC miss, we need to look up the information from lower levels. However, the protection information is actually stored in a PKT in DRAM, and this information is not stored in the same format as in our PKC. The PKT (described under DRAM) is a two-level hash table, so a lookup of protection from lower levels involves two distinct reads. We implemented this as a 3-state FSM within the PKC. Normally, the FSM is in the idle state. On a miss, we begin reading from the first hash table (READ1). After receiving the first read, we can read from the second level (READ2). The actual read is from L2, so we may have varying stalls, based on the availability of L2 and the hit rate within L2 for the PKT. Since the PKT lines (described under DRAM) are not in the same format as in L1 and L2 cache, a read of a line will return protection information for potentially multiple protection domains. In our implementation, we discarded this information. An alternative implementation would be to store this information in adjacent lines within the PKC object set. We do not consider this particularly valuable: the line will still be cached in L2, so subsequent reads will not need to go to DRAM, and since OIDs may share sets if the PKC is small, we do not wish to evict one object's protection in favor of another. In summary: our L1 cache is fairly standard. Our PKC is the most custom component in our implementation, and does 2 lookups on a cache miss, due to the 2-level PKT.

Figure 4: Legba Memory Hierarchy in the MiniMIPS architecture

2) L2 cache

Since we now have a 4-way shared L2 cache, we use a simple L2 controller to arbitrate access. Our arbiter (L2_controller) always gives automatic priority to the request from farthest along the pipeline.
Since prior stages cannot advance even if they have the information from L2, this simplifies pipeline management. We used this memory architecture as a shortcut. We believe that a better implementation would involve a bus, and bus snooping by each caching component to detect updates to locally cached information. However, particularly in the case of the PKC, it is difficult to see how the PKC can recognize that a change to the PKT has been made. While indexing from the PKTR can be detected, the PKT is a 2-level hash table, and detecting a modification to the second level is difficult. As is typical, we have an address signal, a data bus for data transfer in or out, a set of input enable signals (chip, read, and write enable), and an output enable signal (data ready). Our L2 cache lines are shown in Figure 5.

<table> <thead> <tr> <th>Data</th> <th>OID</th> </tr> </thead> </table>

**Figure 5**: L2 cache line

Our L2 cache contains two components not usually found at L2: the TLB and a new component, the Object Look-aside Buffer (OLB). Since the MiniMIPS has no virtual memory, we do not actually implement a real TLB. We did mark the point where it would exist in the L2 cache, and place a dummy "pass-through" component. Choosing to locate the TLB in L2 cache is possible because we envision Legba being implemented in an architecture with a virtually addressed L1 cache. Since protection has been moved out of the TLB and into the PKC, there is no longer any need to keep the TLB on-core, in the L1 region. We do implement the OLB. The OLB is a fully associative array of the available objects with base and limit. A lookup in the OLB takes an address and searches all objects in parallel to return the OID of the object in which the address resides. Since all cache lines within the L2 cache already contain an OID, only lookups to DRAM require searching the OLB. Thus, we mask the OLB latency behind the DRAM access latency. The remainder of the L2 cache is fairly normal, non-associative cache.
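The OLB's address-to-OID mapping can be modeled as follows; this Python sketch is illustrative only, since the real OLB searches every entry in parallel in hardware rather than scanning a list:

```python
# Software model of the Object Look-aside Buffer (OLB): a fully associative
# table of (base, length, OID) entries mapping an address to the object
# that contains it.

class OLB:
    def __init__(self):
        self.entries = []                 # (base, length, oid) triples

    def insert(self, base, length, oid):
        self.entries.append((base, length, oid))

    def lookup(self, addr):
        """Return the OID of the object containing addr, or None on a miss
        (a miss generates an exception in our design)."""
        for base, length, oid in self.entries:
            if base <= addr < base + length:
                return oid
        return None
```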
Once again, we have implemented this as behavioral VHDL using delay statements for simulation. No attempt is made to make the L2 cache data portion synthesizable. The sequence of operations in the L2 cache is as follows (arbitration is left out of this sequence, as it should be clear without discussion):

1. A request comes in to read/write some address, along with assertions on c_en and rd_en or wr_en. If the data is present, it can be read/written immediately, and d_rdy is asserted.
2. On a miss, we perform two tasks in parallel:
   a. Index the address into the OLB, obtaining an OID for the data. (On an OLB miss, we generate an exception.)
   b. Issue the read to DRAM, waiting until DRAM is ready. Note: we assume a write-back cache, so the data is read into L2 cache even on a write.
3. The data is copied into L2 cache. The stats are set, and the OID is appended to the data line.
4. Finally, any pending read/write is done, and d_rdy is asserted.

Note that since both L1 and L2 cache have the same cache line format, the data and OID are returned on one big bus which is redirected directly into the relevant cache line.

3) DRAM

Our DRAM is implemented in totally non-synthesizable behavioral VHDL with explicit delay statements. Our DRAM consists of a large array of words. We make no effort to do a realistic "progressive delay" in our DRAM, but just apply a constant penalty. The DRAM interface is very similar to the L2 interface: we have an address, a data bus, the usual trio of input enable signals, and an output enable signal. Stored in DRAM is our actual authoritative protection information, the PKT. Because the protection information is stored in memory, updates to the PKT can be made by the OS by simply updating the table in memory. The PKT is a two-level hash table where the first table is addressed by (hashed) OID, and the second level by (hashed) PDID. See Figure 6.
**Figure 6**: Legba two-level hash table

The base address of the PKT is stored in a new register, the PKTR. This register is used by the PKC to perform any lookups of permissions, but no other caching constructs need to be aware of the PKTR. The PKT top level is in cache-line sized rows. Since we assume that our OIDs will be sequentially allocated by the operating system, we use a simple 16-bit to 8-bit XOR fold for a hash. This should serve to reduce collisions, particularly if OIDs are allocated in a packed manner, that is, lowest available first. Each row in the top level of the PKT has a number of "tries" for hash function collisions. In this initial prototype implementation, we take no steps to avoid hash table collisions beyond our try limit. Within each top-level PKT entry, there is a set of OIDs and memory addresses. Each address points to the Protection Domain Hash Table (PDHT) for that OID. Within the PDHT, there are a set of PDIDs (one for each try), and a set of 4-bit permissions associated with each one. The lookup process is driven by the PKC on a PKC miss. The lookup process is:

1. Read the cache-line sized row at \( \text{PKTR} + (\text{OID}_{\text{high}} \oplus \text{OID}_{\text{low}}) \).
2. Perform a parallel check of each entry for the correct OID. In cases of multiple matches, use the first. (Since we use 0 for the OID of empty entries, this is possible.)
3. From the associated address \( A \), read the cache-line sized row at \( A + (\text{PDID}_{\text{high}} \oplus \text{PDID}_{\text{low}}) \).
4. Perform a parallel check of each entry for the correct PDID. Use the associated bits.

One useful trait is that we can make each of the second-level PDHTs a separate object, and give read/write permission over that object to the owner of that object.

E. Changes for Virtual Memory

MiniMIPS does not have virtual memory, and no TLB. Because of this, much of the complexity and design points of Legba are unused.
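The two-level PKT walk described under DRAM can be modeled in software. In this Python sketch the XOR fold matches the 16-bit-to-8-bit hash in the text, but the nested dictionaries are our stand-in for the cache-line-sized rows and their fixed try count, and a missing entry simply yields no rights:

```python
# Software model of the two-level PKT walk performed on a PKC miss.

def xor_fold(v16):
    """Fold a 16-bit value into 8 bits: high byte XOR low byte."""
    return ((v16 >> 8) ^ v16) & 0xFF

class PKT:
    def __init__(self):
        self.top = {}        # hashed OID -> row: {oid: PDHT}

    def set_perms(self, oid, pdid, perms):
        pdht = self.top.setdefault(xor_fold(oid), {}).setdefault(oid, {})
        pdht.setdefault(xor_fold(pdid), {})[pdid] = perms

    def lookup(self, oid, pdid):
        """The two dependent reads of the PKC miss path (READ1, READ2)."""
        row = self.top.get(xor_fold(oid), {})     # READ1: top-level row
        pdht = row.get(oid)                       # parallel OID match
        if pdht is None:
            return 0                              # no entry: no rights
        return pdht.get(xor_fold(pdid), {}).get(pdid, 0)  # READ2 + match
```

Because the PKT lives in ordinary memory, the OS updates permissions by writing these tables directly, exactly as the text describes.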
However, most modern processors do have some form of virtual memory. Legba is best suited to a virtually addressed L1 cache. This allows us to remove the TLB from the processor critical path, and to perform OLB lookups in parallel with TLB lookups. Other architectures, while workable, have less ideal choices. For example, for a physically addressed L1 cache, we need to perform a translation step before a lookup. In this case, we would want our OLB to be at the top level, in the L1 cache. Since our OLB is fully associative and sized to hold all possible 16-bit values, this is a massive addition to the on-core processing.

To implement Legba on a system with virtual memory, certain changes need to be made. First, the TLB should be real, located in the L2 cache as we have it placed. Second, all memory lookups need to be tagged with an Address Space Identifier (ASID). This is how most modern virtual-memory-capable processors differentiate different address space data in virtually addressed cache. [1] Thus, we add the ASIDs to the cache.

This introduces the “synonym” problem. We now have differently addressed lines in a single cache that both refer to the same location in memory. Efficient ways of dealing with the synonym problem are an open area of research; most architectures avoid this problem by requiring each address space to be wholly distinct, and any shared memory is bundled into a special “global” ASID. However, one of the major advantages of Legba is that we can share cache lines between different protection domains – and different address spaces. Using this scheme now requires a different data line for each address-space view of the same data. In this case, Legba is best applied to intra-address space access control, not inter-address space access control.

V. Evaluation

During our implementation, we encountered several difficulties that were not addressed by the original designers.
We also clarified some areas of concern, and can place reasonable constraints on several architectural properties. A. Timing Concerns 1) Critical Path Our implementation makes it clear that Legba does not significantly extend the critical path. First, in most processors, the critical path is established by the EX stage of the pipeline. Legba has no impact on the EX stage. Secondly, in the case of sidecar hits, the PKC has no impact on either the WB stage or the ID stage. With sidecar hits, the maximum path length of the PKC is a comparator (not equal, of the same width as the OID), 4 AND gates, 2 OR gates, an inverter and a latch. We find it improbable that any critical path could fail to exceed this. Thirdly, in the case where the critical path is established by the MA stage, our critical path is lengthened by a single AND3 gate. Finally, for the case where the critical path is established by the IF stage, the critical path is lengthened by only an inverter, a comparator (equals, of instruction width), and an AND gate. In all cases, we think it is clear that Legba does not significantly impact the processor critical path. Any timing impact from Legba will come from cache misses and pipeline stalls. 2) Frequency of Stalls Legba contributes to more frequent pipeline stalls in several ways. First, by adding an OID to the cache lines, Legba reduces the available size of the cache line. Likewise, the PKC takes valuable real-estate on the processor core and reduces the available space for cache. Reduced cache size will always lead to some level of increased miss rate. However, this miss rate cannot be determined from design, but requires experimental evaluation. Simply by adding two caches for protection information, Legba adds an additional source for cache misses and pipeline stalls. While we expect PKC misses to be rare compared to data cache misses, this cannot avoid increasing cache miss frequency. 
Indeed, since we now have 4 caches competing for access to L2, we can have increased stall times from L2 cache contention.

3) **Additional Memory Accesses**

Aside from increased miss frequency, a cache miss in Legba is potentially more expensive. On a cache miss from the PKC, we need to index into a two-level hash table. Since this is potentially as much as two DRAM lookups, the stall time is greatly increased, particularly in view of the fact that other processors do not even have a PKC. In most cases, we think it is probable that L2 will contain the bulk of the top-level portion of the PKT, so we expect most PKC misses to require an L2 lookup followed by a DRAM lookup. We do not expect the protection information from the PDHT to ever be found in L2, since the only time it is loaded is when loading the L1 PKC. Thus, unless we flush from the PKC but not from L2, this information will always be found in the PKC, or loaded from DRAM.

Fortunately, we expect PKC misses to be rare. We expect that there will be a limited number of active objects at any time. Since the PKC cache line is approximately 20 bits long, we can expect to hold a large number of object protection lines without substantial cost. Since we would expect normal use to include a small number of tightly clustered objects, it should be rare to see many new objects accessed (from different protection domains) in rapid succession. Finally, the addition of sidecars removes the PKC from the critical path. In some cases, it is even possible for the sidecar to hit when the PKC would miss.

**B. Unexpected Data Dependency**

The original Legba design in [2] contains a flaw. When attempting to write to L1 data cache, we must first look up the associated OID and send it to the PKC for permissions checking. However, the PKC is in the next pipeline stage! This problem is inherent in the Legba architectural model. For read operations, this is unimportant.
We can read the data, discover that read access is unavailable, and generate an exception. The data is thrown away on exception, so there is no protection impact. For writes, we cannot complete the write until we have validated write permission. We earlier alluded to a data dependency problem in the data cache and PKC design. In our discussion of their implementation, this point was omitted for clarity. We developed a workaround for this problem by adding an instruction lookahead from the EX stage to the MA stage. When the MA stage receives a write operation, it uses a 2-state FSM to hold the write until the PKC in the next stage has signaled wr_ok. If the next instruction is a memory access, we stall the pipeline until the write permissions are resolved. One optimization that appears to be available is to perform the write immediately if we discover that the sidecar information is valid and allows writing. Since the sidecar information is validated by comparing the sidecar OID with the cache OID, it seems we can avoid the pipeline stall by checking this in the MA stage. This is not the case. Our data cache must first read the data to obtain the OID. Only after this read can we test the sidecar validity. Unless the processor critical path is more than double the time of a data cache access, we cannot both read the OID and write the data all in a single pipeline stage without noticeably lengthening the critical path. We consider this problem to have very limited impact on performance. Modern compiler technology is well able to handle re-ordering instructions to avoid known processor hazards. In this case, the only hazard is on a memory write followed immediately by a memory access. Unavoidable cases should be vanishingly rare. In those few cases that do apply, the penalty is only a single cycle stall. **C. Protection Key Table** The PKT as implemented has a limited multi-trie system for handling collisions. 
While our reasoning is that collisions in the PKT will be limited due to the order of OID allocations, we must consider the possibility that our expected usage patterns are incorrect. Large numbers of collisions would require extending the trie system. However, extending the trie system arbitrarily has the potential to make PKC miss time unbounded. Further, an operating system error has the potential to produce a circular trie list, locking the processor in a hardware memory walker loop. We consider this an unacceptable compromise. Additional experimental data is necessary before the PKT collision rate can be properly established. Significant additional hardware design would be required to manage extension of the existing trie system.

**D. Object Look-aside Buffer**

The OLB is effectively a form of very non-standard content addressable memory. In our implementation, we only have 16-bit OIDs; nonetheless, this requires 8 bytes per OID for base and limit, for a total OLB size of 512KB. Since the OLB is fully associative, searched in parallel on every lookup, this requires two comparators for each line, along with a large aggregation network. We consider the cost of this OLB to be excessive.

The OLB is not a standard CAM, and CAM prices do not apply because of the two-comparator test. However, if we judge from existing CAM architectures, a megabyte of CAM is currently around $10. If we assume that doubling the number of comparators roughly doubles the cost, a rough ballpark figure for this component is $20. With any additional cooling capacity, a realistic total cost could be $30-$50. [12] While this sounds like a minimal cost, the consumer market has shown a trend toward considering cost above performance (and functionality) when purchasing memory. We consider it unlikely that consumers would be interested in this additional expense without significant demonstrable gain.

VI. FUTURE WORK A.
Experimental evaluation of PKT collisions

Our analysis, like that of the original Legba designers, suggests that collisions in the PKT should be rare. We believe that some experimental analysis should be performed to determine the real extent of collisions. This requires developing a model of a hypothetical operating system using Legba as a basis for protection mechanisms. In the original Legba paper, the authors used user-level code and assigned each variable a different OID. We consider the impact of the operating system to be non-negligible, and we believe that objects must be more sanely delineated. For example, we could develop a system where a memory allocator controlled its own memory. Allocation requests enter through a call gate; some amount of memory is selected for allocation. That memory unit’s PKT contents are reassigned to provide read/write access to the requesting thread. Developing a design of an entire operating system around the concepts of fine-grained intra-address space protection is a non-trivial task, and a project we consider well worthwhile on its own merits.

B. Per-user Object Look-aside Buffer

One way to fix the problem of a large, fully associative OLB is to have smaller, per-process (or per-user) OLBs. We believe that this can substantially alleviate the problem by reducing the OLB size. To maintain separation among different processes, this requires flushing and reloading the OLB on each context switch. This is already necessary for virtual memory implementations, where the TLB is flushed and reloaded on each context switch. We anticipate that the OLB flush overhead can be hidden by the TLB flush overhead.

C. Removing the Object Look-aside Buffer

Legba’s objects are really no different from ordinary segments. They have a base and a limit, and a set of permissions associated with them. The major distinction is that, when accessing segments, the user (via the compiler) must be aware of the segment being accessed.
We believe it is possible to remove the OLB completely by making objects completely into segments. If the program must supply an OID (or segment ID!) on every memory access, this completely removes the OLB from the design. It also changes the semantics of the mapping by allowing objects to overlap. As Legba stands, they may not, since the OLB must be guaranteed to return a single OID for any address. This is still different from segmentation as applied to modern processors such as the Intel architecture. Segments normally have a set of limited hierarchical privilege levels. We would change this by supporting constructs allowing a completely arbitrary set of access control lists. This is a significant redesign of the original Legba scheme. We do not believe it is appropriate to simply graft this change onto Legba. On the contrary, we think a change of this nature should necessitate a complete redesign of Legba, because additional optimizations – or problems – may present themselves.

VII. CONCLUSIONS

Having implemented Legba, even on a limited architecture such as the MiniMIPS, we believe that this architecture represents a promising direction for research. With some of the limitations we have seen, we do not believe Legba is ready for production use in a real microprocessor. However, continuing research is strongly indicated. In particular, we recommend that two directions for future work be explored: First, there is a need to explore operating system design in an arena where fine-grained protection exists. Second, there should be a redesign of Legba where programs must present OIDs for object access.

REFERENCES
Adaptive and Hybrid Algorithms: classification and illustration on triangular system solving

Van-Dat Cung, Vincent Danjean, Jean-Guillaume Dumas, Thierry Gautier, Guillaume Huard, Bruno Raffin, Christophe Rapine, Jean-Louis Roch, Denis Trystram

HAL Id: hal-00318540, https://hal.archives-ouvertes.fr/hal-00318540. Submitted on 4 Sep 2008.

Abstract

We propose in this article a classification of the different notions of hybridization and a generic framework for the automatic hybridization of algorithms. Then, we detail the results of this generic framework on the example of the parallel solution of multiple linear systems.

Introduction

Large-scale software systems and applications are getting increasingly complex. To deal with this complexity, those systems must manage themselves in accordance with high-level guidance from humans. Adaptive and hybrid algorithms enable this self-management of resources and structured inputs. In this paper, we propose a classification of the different notions of hybridization and a generic framework for the automatic hybridization of algorithms.
We illustrate our framework in the context of combinatorial optimization and linear algebra, in a sequential environment as well as in a heterogeneous parallel one. In the sequel, we focus on hybrid algorithms with provable performance. Performance is measured in terms of sequential time, parallel time or precision. After surveying, classifying and illustrating the different notions of hybrid algorithms in section 1, we propose a generic recursive framework enabling the automation of the process of hybridization in section 2. We then detail the process and the result of our generic hybridization on the example of solving linear systems in section 3.

1 A survey and classification of hybrid algorithms

1.1 Definitions and classification

In this section we propose a definition of hybrid algorithm, based on the notion of strategic choices among several algorithms. We then refine this definition to propose a classification of hybrid algorithms according to the number of choices performed (simple, baroque) and the amount of input/architecture information used (tuned, adaptive, introspective, oblivious, engineered). Figure 1 summarizes this classification. (This work is supported by the INRIA-IMAG project AHA: Adaptive and Hybrid Algorithms.)

**Definition 1.1 (Hybrid).** An algorithm is hybrid (or a poly-algorithm) when there is a choice at a high level between at least two distinct algorithms, each of which could solve the same problem. The choice is strategic, not tactical. It is motivated by an increase of the performance of the execution, depending on both input/output data and computing resources.

The following criterion on the number of choices to decide is used to make a first distinction among hybrid algorithms.

**Definition 1.2 (Simple versus Baroque).** A hybrid algorithm may be

- **simple**: $O(1)$ choices are performed whatever the input (e.g. its size) is.
Notice that, while only a constant number of choices are made, each choice can be used several times (an unbounded number of times) during the execution. Parallel divide&conquer algorithms illustrate this point in the next section.

- **baroque**: the number of choices is not bounded: it depends on the input (e.g. its size). While choices in a simple hybrid algorithm may be defined statically before any execution, some choices in baroque hybrid algorithms are necessarily computed at run time.

The choices may be performed based on machine parameters. But there exist efficient algorithms that do not base their choices on such parameters. For instance, cache-oblivious algorithms have been successfully explored in the context of regular [11] and irregular [1] problems, on sequential and parallel machine models [2]. They do not use any information about memory access times, or cache-line or disk-block sizes. This motivates a second distinction based on the information used.

**Definition 1.3 (oblivious, tuned, engineered, adaptive, introspective).** Considering the way choices are computed, we distinguish the following classes of hybrid algorithms:

- A hybrid algorithm is oblivious if its control flow depends neither on the particular values of the inputs nor on static properties of the resources.
- A hybrid algorithm is tuned if a strategic decision is made based on static resources such as memory-specific parameters or heterogeneous features of processors in a distributed computation. A tuned algorithm is engineered if a strategic choice is inserted based on a mix of the analysis and knowledge of the target machine and input patterns. A hybrid algorithm is self-tuned if the choices are automatically computed by an algorithm.
- A hybrid algorithm is adaptive if it avoids any machine- or memory-specific parameterization. Strategic decisions are made based on resource availability or input data properties, both discovered at run-time (such as idle processors).
An adaptive algorithm is *introspective* if a strategic decision is made based on assessment of the algorithm's performance on the given input up to the decision point.

In [12], Ganek and Corbi defined *autonomic computing* to be the conjunction of self-configuring, self-healing, self-optimizing and self-protecting systems. Self-configuring relates to what we call adaptivity, self-optimizing to self-tuning. Autonomic computing thus adds fault-tolerance (self-healing) and security (self-protecting) to our notion of *hybrid* computing.

The above definitions deliberately focus on a general characterization of adaptation in the algorithm. They consider neither implementation nor performance. To implement an *adaptive* algorithm, we may distinguish two approaches: either the choices are included in the algorithm itself, or they may be inserted dynamically to change the software itself, or its execution environment. An algorithm is *evolutive* (or interactive) if a strategic choice is inserted dynamically. Reflexive languages make it possible to change the behavior of a program dynamically [19]. Polymorphism or template specialization is a way to optimize an algorithm. We view polymorphism and template mechanisms as a possible way to implement the different kinds of *hybrid* algorithm we propose.

### 1.2 Illustrations on examples

We illustrate the previous criteria on some examples of *hybrid* algorithms or libraries.

**BLAS libraries.** ATLAS [24] and GOTO [14] are libraries that implement basic linear algebra subroutines. Computations on matrices are performed by blocks. The block size and the sequential algorithm used for a basic block are chosen based on performance measures on the target architecture. The decisions are computed automatically at installation with ATLAS while they are provided only for some architectures with GOTO. ATLAS implements *self-tuned simple hybrid* algorithms and GOTO *simple engineered* ones.
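The self-tuning idea behind ATLAS-style libraries can be shown in miniature: time a handful of interchangeable candidate routines once, at "installation", then commit to the winner for all later calls. This is a hedged sketch; the candidate functions and timing loop below are invented for illustration and are not ATLAS's actual machinery.

```python
import timeit

# Two interchangeable candidates for the same problem (illustrative only).
def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

def sum_builtin(xs):
    return sum(xs)

def self_tune(candidates, sample):
    """One-off 'installation' step: measure each candidate, keep the fastest."""
    return min(candidates,
               key=lambda f: timeit.timeit(lambda: f(sample), number=100))

# The chosen routine is then used for every subsequent call. O(1) strategic
# choices regardless of input size -> a simple, self-tuned hybrid.
chosen = self_tune([sum_loop, sum_builtin], list(range(1000)))
```

Because the single choice is made from measurements rather than hand-supplied machine constants, the scheme is self-tuned rather than engineered, in the terminology above.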
**Granularity in sequential divide&conquer algorithms.** Halting the recursion in divide&conquer to complete small-size computations with another, more efficient algorithm is a classical technique to improve the practical performance of sequential algorithms. The resulting algorithm is a simple hybrid one. Often the recursion threshold is based on resource properties. This is the case for the GMP integer multiplication algorithm, which successively couples four algorithms: Schönhage-Strassen \( \Theta(n \log n \log \log n) \), Toom-Cook 3-way \( \Theta(n^{1.465}) \), Karatsuba \( \Theta(n^{\log_2 3}) \) and the standard \( \Theta(n^2) \) algorithm. **Linpack benchmark for parallel LU factorization.** Linpack [6] is a milestone in the evaluation of parallel machines' computing power. It is the reference benchmark for the top-500 ranking of the most powerful machines. The computation consists in an LU factorization, with row partial pivoting in the "right-looking" variant [6], the processors being assumed identical. To limit the volume of communication to \( O(n^2 \sqrt{p}) \), a cyclic bidimensional block partitioning is used on a virtual grid of \( q^2 = p \) processors. The block \((i, j)\) is mapped to the processor of index \( P(i, j) = (i \mod q)q + (j \mod q) \) and operations that modify block \((i, j)\) are scheduled on processor \( P(i, j) \). Linpack has a standard implementation on top of MPI with various parameters that may be tuned: the broadcast algorithm (for pivot broadcasting on a line of processors), the level of recursion in the "right-looking" decomposition algorithm and the block size. The parallel architecture may also be tuned to improve the performance [22]. Linpack is an engineered tuned simple hybrid algorithm. **FFTW.** FFTW [10] is a library that implements the discrete Fourier transform of a vector of size \( n \). We summarize here the basic principle of FFTW.
For all \( 2 \leq q \leq \sqrt{n} \), the Cooley-Tukey recursive FFT algorithm reduces the computation to \( q \) FFT subcomputations of size \( \left\lceil \frac{n}{q} \right\rceil \) and \( \left\lceil \frac{n}{q} \right\rceil \) FFT subcomputations of size \( q \), plus \( O(n) \) additional operations. Hybridization in FFTW occurs at two levels: - at installation on the architecture. For a given \( n_0 \), the best unrolled FFT algorithm for all \( n \leq n_0 \) is chosen among a set of algorithms by experimental performance measurements. This hybrid algorithm is simple tuned. - at execution. For a given size \( n \) of the input vectors and for all \( n_0 \leq m \leq n \), a planner precomputes the splitting factor \( q_m \) that will subsequently be used for any recursive FFT of size \( m \). This precomputation is performed by dynamic programming: it optimizes each sub-problem of size \( m \) locally, independently of the larger context where it is invoked. The planner adds a precomputation overhead. This overhead may be amortized by using the same plan for computing several FFTs of the same size \( n \). FFTW3 also proposes heuristic algorithms that compute plans with a smaller overhead than dynamic programming. The number of choices in FFTW depends on the size \( n \) of the inputs. FFTW is a self-tuned baroque hybrid algorithm. **Granularity in parallel divide&conquer algorithms.** Parallel algorithms are often based on a mix of two algorithms: a sequential one that minimizes the number of operations $T_1$ and a parallel one that minimizes the parallel time $T_\infty$. The cascading divide&conquer technique [17] is used to construct a hybrid algorithm with parallel time $O(T_\infty)$ while performing $O(T_1)$ operations. For instance, the iterated product of $n$ elements can be performed in parallel time $T_\infty = 2 \log n$ with $\frac{n}{\log n}$ processors by choosing a grain size of $\log n$.
Even if this choice depends on the input size $n$, it can be computed only once at the beginning of the execution. The algorithm is a simple hybrid one. Other examples of such parallel simple hybrid algorithms are: the computation of the maximum of $n$ elements in asymptotically optimal time $\Theta(\log \log n)$ on a CRCW PRAM with $\frac{n}{\log n}$ processors [17]; the solving of a triangular linear system in parallel time $O(\sqrt{n \log n})$ with $\Theta(n^2)$ operations [20]. In section 3 we detail an extended baroque hybridization for this problem, enabling higher performance on a generic architecture. **Parallel adaptive algorithms by work-stealing - Kaapi.** Cilk [18], Athapascan/Kaapi [16] and Satin [23] are parallel programming interfaces that support recursive parallelism and implement a work-stealing scheduling based on the work-first principle. A program makes parallelism and synchronization explicit. While Cilk and Satin are restricted to series-parallel task DAGs, Kaapi accepts any kind of dataflow dependencies. However, all are based on a sequential semantics: both depth-first sequential search (DFS) and breadth-first parallel search (BFS) are correct executions of the program. The program thus implements a parallel algorithm (BFS) that can also be considered as a sequential one (DFS). The (recursive) choices between both are performed by the scheduler. To save memory, depth-first execution (DFS) is always locally preferred. When a processor becomes idle, it steals the oldest ready task on a non-idle processor. This stealing operation then corresponds to a breadth-first execution (BFS). Since each parallel task creation can be performed either by a sequential call (DFS algorithm) or by the creation of a new thread (BFS algorithm) depending on resource idleness, any parallel program with a non-fixed degree of parallelism is a hybrid baroque algorithm. Because the choice does not depend on the input size but only on resource idleness, the algorithm is adaptive.
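To close this set of illustrations, the recursion-threshold technique of the simple hybrid examples above can be made concrete with a toy sketch. This is our own illustration, on sorting rather than on GMP's integer product, and the cutoff value is arbitrary rather than tuned:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative cutoff; a *tuned* variant would measure it on the target
// machine instead of hard-coding it.
constexpr std::size_t kCutoff = 32;

// Base algorithm with a smaller constant factor on tiny inputs.
void insertion_sort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    for (std::size_t i = lo + 1; i < hi; ++i)
        for (std::size_t j = i; j > lo && v[j - 1] > v[j]; --j)
            std::swap(v[j - 1], v[j]);
}

// Simple hybrid divide&conquer: a constant number of kinds of choices
// (recurse vs. switch to the base algorithm), each exercised many times
// during the execution.
void hybrid_sort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo <= kCutoff) {        // the strategic choice
        insertion_sort(v, lo, hi);
        return;
    }
    std::size_t mid = lo + (hi - lo) / 2;
    hybrid_sort(v, lo, mid);
    hybrid_sort(v, mid, hi);
    std::inplace_merge(v.begin() + lo, v.begin() + mid, v.begin() + hi);
}
```

The single static test `hi - lo <= kCutoff` is executed an unbounded number of times, but the *kind* of choice is fixed before execution: this is what makes the algorithm simple rather than baroque.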
In section 2.3 we detail a more general coupling for this problem. 2 Generic algorithmic schemes for hybrid algorithms In this section we detail a generic scheme to control the time overhead due to choices in a hybrid algorithm, providing a proven upper bound for sequential and parallel baroque hybridization. 2.1 Basic representation Let $f$ be a problem with input set $I$ and output set $O$. For the computation of $f$, a hybrid algorithm is based on the composition of distinct algorithms $(f_i)_{i=1,...,k}$, each solving the problem $f$. Since an algorithm is finite, the number $k \geq 2$ of algorithms is finite; however, each of those algorithms may use additional parameters, based on the inputs, outputs or machine parameters (e.g. the number of processors). We assume that each of those algorithms is written in a recursive way: to solve a given instance of $f$, algorithm $f_i$ reduces it to instances of $f$ of smaller sizes. Hybridization then consists in choosing, for each of those subcomputations, the suitable algorithm $f_j$ to be used (fig. 2). This choice can be implemented in various ways. For instance, $f$ may be implemented as a pure virtual function, each of the $f_i$ being an inherited specialization. Scheme for decreasing the overhead due to choices. For baroque algorithms the choices between the different $f_i$'s are performed at runtime. Therefore an important problem is to reduce the overhead related to the computation of each choice. In the next section, we describe an original alternative scheme to decrease the overhead induced by the choices for each call to $f$ in the previous algorithm. Generalization to various computations [5] (namely Branch&X computations and linear algebra) is based on the use of an exception mechanism. For a given subcomputation, a given default computation $f_j$ is favored. However, this choice may be changed under some exceptional circumstances depending on values or machine parameters.
Then, if the total number of such exceptions is small with respect to the total number of subcomputations, the overhead due to choices becomes negligible. We detail such a scheme in the next section. 2.2 Baroque coupling of sequential and parallel algorithms We present the coupling of a sequential algorithm $f_{seq}$ and a parallel one $f_{par}$ that solve the same problem $f$. For the sake of simplicity, we assume that the sequential algorithm performs a first part of the sequential computation (called ExtractSeq) and then performs a recursive terminal call to $f$ to complete the computation. Besides, we assume that the sequential algorithm is such that, at any time of its execution, the sequence of operations that completes the algorithm $f_{seq}$ can be performed by another parallel recursive (fine-grain) algorithm $f_{par}$. The operation that consists in extracting the last part of the sequential computation in progress, to perform it in parallel with $f_{par}$, is called ExtractPar. After completion of $f_{par}$, the final result is computed by merging the result of the first part computed by $f_{seq}$ (not affected by ExtractPar) and the result of the ExtractPar part computed by $f_{par}$. More precisely, given a sequential algorithm $f_{\text{seq}}$ (resp. a parallel one $f_{\text{par}}$), the result $r$ of its evaluation on an input $x$ is denoted $f_{\text{seq}}(x)$ (resp. $f_{\text{par}}(x)$). We assume that $x$ has a list structure with a concatenation operator $\sharp$ and that there exists an operator $\oplus$ (not necessarily associative) for merging the results. At any time of the evaluation of $f_{\text{seq}}(x)$, $x$ can be split into $x_1 \sharp x_2$, due to either an $\text{ExtractSeq}$ or an $\text{ExtractPar}$ operation on $x$. The result computed by the parallel algorithm is then $f_{\text{par}}(x) = f(x_1) \oplus f(x_2)$. We assume that both results $f_{\text{seq}}(x)$ and $f_{\text{par}}(x)$ are equivalent with respect to a given measure.
In the restricted framework of list homomorphisms [3], this hypothesis can be written as $f(x \sharp y) = f(x) \oplus f(y)$. However, it is possible to provide parallel algorithms for problems that are not list homomorphisms [4], at the price of an increase in the number of operations. To decrease the overhead related to the choices for $f$ between $f_{\text{seq}}$ and $f_{\text{par}}$, $f_{\text{seq}}$ is the default choice. Based on a work-stealing scheduling, $f_{\text{par}}$ is only chosen when a processor becomes idle, which leads to an $\text{ExtractPar}$ operation. This exception mechanism can be implemented by maintaining, during any execution of $f_{\text{seq}}(x)$, a lower-priority process ready to perform an $\text{ExtractPar}$ operation on $x$ (resulting in an input $x_2$ for $f_{\text{par}}$) only when a processor becomes idle. Then the overhead due to choices is only related to the number of $\text{ExtractPar}$ operations actually performed. To analyze this number, we adopt the simplified model of Cilk-5 [18], also valid for Kaapi [16]. It relies on Graham's bound (see Equation 2 in [18]). Let $T_{1}^{(\text{seq})}$ (resp. $T_{1}^{(\text{par})}$) be the execution time on a sequential processor (i.e. the work) of $f_{\text{seq}}$ (resp. $f_{\text{par}}$), and let $T_{\infty}^{(\text{par})}$ be the execution time of $f_{\text{par}}$ on an unbounded number of identical processors. **Theorem 2.1.** When the hybrid program is executed on a machine with $m$ identical processors, the number of choices that result in a choice of $f_{\text{par}}$ for $f$ instead of $f_{\text{seq}}$ is bounded by $(m - 1)\,T_{\infty}^{(\text{par})}$. **Proof.** On an infinite number of processors, all the computation is performed by $f_{\text{par}}$; the parallel time of the hybrid algorithm is then $T_{\infty}^{(\text{par})}$. Then the number of steal requests is bounded by $T_{\infty}^{(\text{par})}$ on each processor (Graham's bound), except for the one running $f_{\text{seq}}$.
The latter only executes the sequential algorithm, but is subject to ExtractPar operations due to steal requests from the others. This is true for any execution of such a hybrid baroque algorithm. □ The consequence of this theorem is that, for a fine-grain parallel algorithm that satisfies $T_{\infty}^{(\text{par})} \ll T_{1}^{(\text{seq})}$, even if the hybrid algorithm is baroque (non-constant number of choices), the time overhead due to choices in the hybrid algorithm is negligible when compared to the overall work. **Remark.** The overhead due to the default call to ExtractSeq can also be reduced. Ideally, ExtractSeq should extract data whose computation by $f_{\text{par}}$ would require a time at least $T_{\infty}^{(\text{par})}$, which is the critical time for $f_{\text{par}}$. 2.3 Application to the coupling of DFS/BFS for combinatorial optimization The performance and overhead of the previous scheme were experimentally determined for the Quadratic Assignment Problem (for instance NUGENT 22\(^1\)). This application implements a Branch&Bound algorithm: it recursively generates nodes in the search tree, which has 221938 nodes and a maximal depth of 22. Locally, each processor implements by default a sequential algorithm \(f_{\text{seq}}\) that performs a depth-first search (DFS) in the tree. This saves memory and also optimizes branching in the tree without copying (the children of a node \(n\) are sequentially created from the value of \(n\) with backtracking). To minimize the critical time, the alternative parallel algorithm \(f_{\text{par}}\) implements a breadth-first search (BFS). When a processor becomes idle, it picks the oldest node of a randomly chosen non-idle processor (\(\text{ExtractPar}\)). This parallel algorithm introduces an overhead due to node copying. The experiments were conducted on the iCluster2\(^2\), a cluster of 104 nodes interconnected by a 100Mbps Ethernet network.
Each node features two Itanium-2 processors (900 MHz) and 3 GB of local memory. The algorithm was parallelized using Kaapi. The degree of parallelism (threshold) can be adjusted: after a given depth, the subtree of a node is computed locally by \(f_{\text{seq}}\). This threshold defines the minimum granularity and should be chosen such that the time of the local computation by \(f_{\text{seq}}\) is comparable to the time overhead of parallelism.

Figure 3: Impact of granularity

Figure 4: Execution time (sequential time: 34,695s)

The sequential execution time (C++ code without Kaapi) was 34,695 seconds. With Kaapi, at fine grain (threshold \(\geq 10\)), the execution on a single processor generated 225,195 tasks and ran in 34,845 seconds. The impact of the degree of parallelism can be seen in Figure 3, which gives the number of parallel tasks generated for different thresholds. The degree of parallelism increases drastically at threshold 5 and approaches its maximum at threshold 10. Figure 4 shows that the application is scalable with a fine threshold (8, i.e. 209406 nodes). Since the critical time \(T_\infty\) is small, there are few successful steals and the overhead of the hybridization between \(f_{\text{seq}}\) and \(f_{\text{par}}\) has a small impact on efficiency. \(^1\) http://www.opt.math.tu-graz.ac.at/qaplib \(^2\) http://www.inrialpes.fr/sed/i-cluster2 Notice that Kaapi also includes a (hybrid) checkpoint/restart mechanism [16] to support resilience and the addition of processors. This feature makes the application itself oblivious to dynamic platforms. The overhead of this checkpoint mechanism appears negligible for this application (Figure 4). In the next section, we detail various forms of hybridization on a single example, the solving of a triangular system.
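The ExtractSeq/ExtractPar coupling of §2.2, which underlies this experiment, can be sketched on an iterated sum. This is our own minimal two-thread C++ illustration (not the Kaapi API); for determinism, the helper is "idle" from the start, so exactly one ExtractPar occurs:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

// f_seq consumes the input element by element (ExtractSeq of size 1);
// when a steal request is pending, it hands over the untouched second
// half of its remaining input (ExtractPar) to the parallel computation
// f_par, here a plain std::accumulate. The merge operator + plays the
// role of the operator ⊕ of §2.2.
long hybrid_sum(const std::vector<long>& x) {
    std::atomic<bool> steal_requested{true};  // helper is idle at start
    std::atomic<long> stolen_result{0};
    std::size_t lo = 0, hi = x.size();        // [lo, hi) still owned by f_seq
    std::thread thief;
    long acc = 0;
    for (; lo < hi; ++lo) {
        if (steal_requested.load() && hi - lo > 1) {
            std::size_t mid = lo + (hi - lo) / 2;  // ExtractPar on x[mid, hi)
            steal_requested = false;
            thief = std::thread([&x, &stolen_result, mid, hi] {
                stolen_result = std::accumulate(x.begin() + mid,
                                                x.begin() + hi, 0L);
            });
            hi = mid;                         // f_seq keeps only x[lo, mid)
        }
        acc += x[lo];                         // sequential work
    }
    if (thief.joinable()) thief.join();
    return acc + stolen_result;               // merge the two partial results
}
```

In a real work-stealing runtime the steal request is raised asynchronously by an idle processor, and the overhead is bounded by the number of ExtractPar operations actually performed, as stated in Theorem 2.1.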
3 Hybridization for triangular system solving 3.1 Triangular system solving with matrix right-hand side Exact matrix multiplication, together with matrix factorizations, over finite fields can now be performed at the speed of the highly optimized numerical BLAS routines. This has been established by the FFLAS and FFPACK libraries [8, 9]. In this section we discuss the implementation of exact solvers for triangular systems with matrix right-hand side (or, equivalently, left-hand side). This is also the simultaneous resolution of \( n \) triangular systems. Without loss of generality for the triangularization, we here consider only the case where the row dimension \( m \) of the triangular system is less than or equal to the column dimension \( n \). The resolution of such systems is e.g. the main operation in block Gaussian elimination. For solving triangular systems over finite fields, the block algorithm reduces to matrix multiplication and achieves the best known algebraic complexity. Therefore, from now on we will denote by \( \omega \) the exponent of square matrix multiplication (e.g. from 3 for the classical algorithm down to 2.375477 for Coppersmith-Winograd). Moreover, we can bound the arithmetic cost of an \( m \times k \) by \( k \times n \) rectangular matrix multiplication (denoted by \( R(m, k, n) \)) as follows: \[ R(m, k, n) \leq C_{\omega} \min(m, k, n)^{\omega - 2} \max(mk, mn, kn) \] [15]. In the following subsections, we present the block recursive algorithm and two optimized implementation variants. 3.2 Scheme of the block recursive algorithm The classical idea is to use the divide and conquer approach. Here, we consider the upper left triangular case without loss of generality, since any combination of the upper/lower and left/right triangular cases is similar: if \( U \) is upper triangular, \( L \) is lower triangular and \( B \) is rectangular, we call \( \text{ULeft-Trsm} \) the resolution of \( UX = B \).
Suppose that we split the matrices into blocks and use the divide and conquer approach as follows: \[ \underbrace{\begin{bmatrix} A_1 & A_2 \\ & A_3 \end{bmatrix}}_{A} \underbrace{\begin{bmatrix} X_1 \\ X_2 \end{bmatrix}}_{X} = \underbrace{\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}}_{B} \] 1. \( X_2 := \text{ULeft-Trsm}(A_3, B_2) \); 2. \( B_1 := B_1 - A_2X_2 \); 3. \( X_1 := \text{ULeft-Trsm}(A_1, B_1) \); With $m = n$ and classical matrix multiplication, the arithmetic cost of this algorithm is $TRSM(m) = m^3$, as shown e.g. in [9, Lemma 3.1]. We also give the cost of the triangular matrix multiplication, TRMM, and of the triangular inversion, INVT, as we will need them in the following sections. To perform the multiplication of a triangular matrix by a dense matrix via a block decomposition, one requires four recursive calls and two dense matrix-matrix multiplications. The cost is thus $TRMM(m) = 4TRMM(m/2) + 2MM(m/2)$, which gives $TRMM(m) = m^3$ with classical matrix multiplication. Now the inversion of a triangular matrix requires two recursive calls to invert $A_1$ and $A_3$. Then, the square block of the inverse is $-A_1^{-1}A_2A_3^{-1}$. The cost is thus $INVT(m) = 2INVT(m/2) + 2TRMM(m/2)$, which gives $INVT(m) = \frac{1}{3}m^3$ with classical matrix multiplication. 3.3 Two distinct hybrid degenerations 3.3.1 Degenerating to the BLAS "dtrsm" Matrix multiplication speed over finite fields was improved in [8, 21] by the use of the numerical BLAS\(^3\) library: matrices were converted to floating point representations (where the linear algebra routines are fast) and converted back to a finite field representation afterwards. The computations remain exact as long as no overflow occurs. An implementation of ULeft-Trsm can use the same techniques. Indeed, as soon as no overflow occurs, one can replace the recursive call to ULeft-Trsm by the numerical BLAS dtrsm routine. One can remark, however, that approximate divisions can occur.
So we need to ensure both that only exact divisions are performed and that no overflow appears. However, when the system is unitary (only 1's on the main diagonal), the divisions are of course exact, and will in fact never be performed. Our idea is then to transform the initial system so that all the recursive calls to ULeft-Trsm are unitary. For a triangular system $AX = B$, it suffices first to factor the matrix $A$ into $A = UD$, where $U$ and $D$ are respectively an upper unit triangular matrix and a diagonal matrix. Next, the unitary system $UY = B$ is solved by any ULeft-Trsm (even a numerical one), without any division. The initial solution is then recovered over the finite field via $X = D^{-1}Y$. This normalization leads to an additional cost of $O(mn)$ arithmetic operations (see [9] for more details). We now consider the coefficient growth. Using the BLAS routine trsm amounts to solving the triangular system over the integers (stored as doubles for dtrsm). The restriction is the coefficient growth in the solution. Indeed, the $k^{th}$ value in the solution vector is a linear combination of the $(n - k)$ values already computed. This implies a linear growth in the coefficient size of the solution, with respect to the system dimension: for a given $p$, the dimension $n$ of the system must satisfy $(p-1)^2 p^{\,n-2} \leq 2^{m_a}$, where $m_a$ is the size of the mantissa [9]. Then the resolution over the integers using the BLAS trsm routine is exact. For instance, with a 53-bit mantissa, this gives quite small matrices, namely at most $55 \times 55$ for $p = 2$, at most $4 \times 4$ for $p \leq 9739$, and at most $p = 94906249$ for $2 \times 2$ matrices. \footnote{www.netlib.org/blas} Nevertheless, this technique is speed-worthy in many cases. In the following, we will denote by $S_{BLAS}(p)$ the maximal matrix size for which the BLAS resolution is exact.
Also, $\text{BLASTrsm}$ denotes the recursive block algorithm, switching to the BLAS resolution as soon as the splitting gives a block size lower than $S_{BLAS}(p)$. 3.3.2 Degenerating to delayed modulus In the previous section we noticed that BLAS routines within $\text{Trsm}$ are used only for small systems. An alternative is to change the cascade: instead of calling the BLAS, one can switch to the classical iterative algorithm. Let $A \in \mathbb{Z}/p\mathbb{Z}^{m \times m}$ and $B, X \in \mathbb{Z}/p\mathbb{Z}^{m \times n}$ such that $AX = B$; then $\forall i,\; X_{i,\ast} = \frac{1}{A_{i,i}}\left(B_{i,\ast} - A_{i,[i+1..m]}X_{[i+1..m],\ast}\right)$. The idea is that the iterative algorithm computes only one row of the whole solution at a time. Therefore its threshold $t$ is greater than the one of the BLAS routine: it only requires $t(p-1)^2 < 2^{m_a}$ for a $0..p-1$ unsigned representation, or $t(p-1)^2 < 2^{m_a+1}$ for a signed representation in $-\frac{p-1}{2}..\frac{p+1}{2}$. Now we focus on the dot product operation, the basic block of the matrix-vector product. According to [7], where different implementations of a dot product are proposed and compared on different architectures (Zech log, Montgomery, float, ...), the best implementation is a combination of a conversion to floating point representation with delayed modulus (for large primes and vector sizes) and an overflow detection trick (for smaller ones). $\text{DelayTrsm}_t$ denotes the recursive block algorithm, switching to the delayed iterative resolution as soon as the splitting gives a block size lower than $t$ (of course, $t$ must satisfy $t \leq S_{BLAS}(p)$). 3.4 Tuning the "Trsm" implementation 3.4.1 Experimental tuning As shown in section 3.2, the block recursive algorithm $\text{Trsm}$ is based on matrix multiplications. This allows us to use the fast matrix multiplication routine of the FFLAS package [8]. This is an exact wrapping of the ATLAS library\(^4\), used as a kernel to implement the $\text{Trsm}$ variants.
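A minimal sequential sketch of this cascade (our own C++ illustration, not FFLAS-FFPACK code: plain modular arithmetic stands in for the BLAS kernels, and the threshold is arbitrary) combines the block recursion of §3.2 with the iterative degeneration of §3.3.2:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

using Mat = std::vector<std::vector<int64_t>>;  // dense row-major blocks

// Modular inverse by the extended Euclidean algorithm (p prime).
int64_t inv_mod(int64_t a, int64_t p) {
    int64_t t = 0, nt = 1, r = p, nr = a % p;
    while (nr != 0) {
        int64_t q = r / nr;
        int64_t tmp = t - q * nt; t = nt; nt = tmp;
        tmp = r - q * nr; r = nr; nr = tmp;
    }
    return (t % p + p) % p;
}

// Solve the diagonal block U[r0..r1) of U X = B over Z/pZ, in place
// (X overwrites the corresponding rows of B). Contributions of rows
// >= r1 have already been subtracted by the enclosing recursion.
void uleft_trsm(const Mat& U, Mat& B, std::size_t r0, std::size_t r1,
                int64_t p, std::size_t threshold) {
    std::size_t n = B[0].size();
    if (r1 - r0 <= threshold) {            // iterative degeneration (3.3.2)
        for (std::size_t i = r1; i-- > r0; ) {
            int64_t d = inv_mod(U[i][i], p);
            for (std::size_t c = 0; c < n; ++c) {
                int64_t s = B[i][c];
                for (std::size_t j = i + 1; j < r1; ++j)
                    s = (s - U[i][j] * B[j][c]) % p;
                B[i][c] = ((s % p) * d % p + p) % p;
            }
        }
        return;
    }
    std::size_t mid = r0 + (r1 - r0) / 2;  // the splitting choice k
    uleft_trsm(U, B, mid, r1, p, threshold);   // X2 := Trsm(A3, B2)
    for (std::size_t i = r0; i < mid; ++i)     // B1 := B1 - A2 X2
        for (std::size_t c = 0; c < n; ++c) {
            int64_t s = B[i][c];
            for (std::size_t j = mid; j < r1; ++j)
                s = (s - U[i][j] * B[j][c]) % p;
            B[i][c] = (s % p + p) % p;
        }
    uleft_trsm(U, B, r0, mid, p, threshold);   // X1 := Trsm(A1, B1)
}
```

Replacing the inner update loops by a conversion to doubles and a numerical gemm/dtrsm call, as in §3.3.1, gives the BLASTrsm variant; the threshold then plays the role of $S_{BLAS}(p)$ or $S_{Del}(n,p)$.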
The following table summarizes the experimental results of [9] and shows which of the two preceding variants is better. $\text{Mod<double>}$ is a field representation from [7] where the elements are stored as floating point numbers, to avoid one of the conversions. $G$-$Zpz$ is a field representation from [13] where the elements are stored as small integers. <table> <thead> <tr> <th>$n$</th> <th>400</th> <th>700</th> <th>1000</th> <th>2000</th> <th>5000</th> </tr> </thead> <tbody> <tr> <td>$\text{Mod&lt;double&gt;(5)}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> </tr> <tr> <td>$\text{Mod&lt;double&gt;(32749)}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> </tr> <tr> <td>$G$-$Zpz(5)$</td> <td>$\text{DelayTrsm}_{150}$</td> <td>$\text{DelayTrsm}_{150}$</td> <td>$\text{DelayTrsm}_{150}$</td> <td>$\text{BLASTrsm}$</td> <td>$\text{BLASTrsm}$</td> </tr> <tr> <td>$G$-$Zpz(32749)$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> <td>$\text{DelayTrsm}_{50}$</td> </tr> </tbody> </table> Table 1: Best variant for $\text{Trsm}$ on a P4, 2.4GHz In the following, we will denote by $S_{Del}(n, p)$ the threshold $t$ for which $\text{DelayTrsm}_t$ is the most efficient routine for matrices of size $n$. $S_{Del}(n, p)$ is set to 0 if e.g. the $\text{BLASTrsm}$ routine is better. The experiments show that $S_{Del}(n,p)$ can be bigger or smaller than $S_{BLAS}(p)$, depending on the matrix size, the prime and the underlying arithmetic implementation. ### 3.4.2 Hybrid tuned algorithm The experimental results of the previous section thus provide us with a hybrid algorithm where we can tune static thresholds in order to benefit from all the variants.
Moreover, some choices have to be made for the splitting size $k$ in order to reach the optimal complexity $T_{opt}$: $$T_{opt}(m) = \min_k \{ T_{opt}(k) + T_{opt}(m-k) + R(m-k,k,n) \}.$$ **Algorithm** $\text{ULeft-Trsm}(A, B)$ **Input:** $A \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times m}, \quad B \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times n}$. **Output:** $X \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times n}$ such that $AX = B$.
\begin{verbatim}
if m <= S_Del(m,p) then          // hybrid modulus degeneration (3.3.2)
    X := DelayTrsm(A, B)
else if m <= S_BLAS(p) then      // hybrid BLAS degeneration (3.3.1)
    X := BLASTrsm(A, B)
else                             // hybrid block recursion (3.2)
    k := Choice(1..floor(m/2))
    split the matrices into blocks of sizes k and m-k
    X2 := ULeft-Trsm(A3, B2)
    B1 := B1 - A2.X2
    X1 := ULeft-Trsm(A1, B1)
return X
\end{verbatim}
### 3.5 Baroque hybrid parallel Trsm The previous algorithm benefits from parallelism at the level of the BLAS matrix product operations. However, using the scheme proposed in §2.2, it is possible to obtain an algorithm with more parallelism, in order to decrease the critical time when more processors are available. Furthermore, this also improves the performance of the distributed work-stealing scheduler. Indeed, while $X_2$ and $B_1$ are being computed, additional idle processors may proceed to the parallel computation of $A_1^{-1}$: $X_1$ may be computed in two different ways: i. $X_1 = TRSM(A_1, B_1)$: the arithmetic cost is $T_1 = k^3$ and $T_\infty = k$; ii. $X_1 = TRMM(A_1^{-1}, B_1)$: the arithmetic cost is the same, $T_1 = k^3$, but $T_\infty = \log k$. Version (ii) with TRMM is more efficient from a parallel point of view: the two recursive calls and the matrix multiplication in TRMM are independent. They can be performed on distinct processors, requiring fewer communications than TRSM.
Since the precomputation of $A_1^{-1}$ increases the overall arithmetic cost, it is only performed if there are extra unused processors during the computation of $X_2$ and $B_1$; the latter therefore has higher priority. The problem is to decide the size $k$ of the matrix $A_1$ that will be inverted in parallel. With the optimal value of $k$, the computation of $A_1^{-1}$ completes simultaneously with that of $X_2$ and $B_1$. This optimal value of $k$ depends on many factors: the number of processors, their architecture, the subroutines, the data. The algorithm presented in the next paragraph uses the oblivious adaptive scheme described in §2.2 to estimate this value at run time, using the hybrid coupling of a "sequential" algorithm $f_s$ with a parallel one $f_p$. ### 3.5.1 Parallel adaptive TRSM We assume that the parallel hybrid TRSM is spawned by a high-priority process. The parallel hybrid TRSM then consists in computing concurrently in parallel (Figure 5): - "sequential" computation ($f_s$) at high priority: bottom-up computation of $X = TRSM(A, B)$ until reaching $k$, implemented by the BUT algorithm (Bottom-Up TRSM, §A.1); all processes that perform parallel BLAS operations in BUT are executed at high priority; - parallel computation ($f_p$) at low priority: parallel top-down inversion of $A$ until reaching $k$, implemented by the TDTI algorithm (Top-Down Triangular Inversion, §A.2); all processes that participate in the parallel TDTI are executed at low priority. **Algorithm HybridParallelTrsm($A;B$)** **Input:** $A \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times m}$, $B \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times n}$. **Output:** $X \in \mathbb{Z}_{/p\mathbb{Z}}^{m \times n}$ such that $AX = B$. 1. $k_{TDTI} := 0$; $k_{BUT} := m$; 2. Parallel { - At high priority: $(X_2, B_1') := BUT(A, B)$; - At low priority: $M := TDTI(\emptyset, A)$; } 3. Here, BUT has stopped TDTI and $k_{BUT} \leq k_{TDTI}$. 4. Now, let $A_1^{-1} = M_{1..k_{BUT},1..k_{BUT}}$; 5.
$X_1 := A_1^{-1}.B_1'$; At each step, the sequential bottom-up BUT algorithm (resp. the parallel top-down TDTI) performs an ExtractSeq (resp. ExtractPar) operation on a block of size $k_B$ (resp. $k_I$) (Figure 5; see the detailed subroutines BUT and TDTI in the appendices). Note that the values of $k_B$ and $k_I$ may vary during the execution, depending on the current state. 3.5.2 Definition of parameters $k_I$ and $k_B$ Parameter $k_B$ (resp. $k_I$) corresponds to the $\text{ExtractSeq}$ (resp. $\text{ExtractPar}$) operations presented in §2.2. The choice of their values is performed at each recursive step, depending on resource availability. This section analyzes this choice in the case where only one system is to be solved, i.e. $n = 1$. Let $r = k_{\text{BUT}} - k_{\text{TDTI}}$. - On the one hand, to fully exploit parallelism, $k_B$ should not be larger than the critical time $T_\infty$ of TDTI, i.e. $k_B = \log^2 r$. - On the other hand, in order to keep an $O(n^2)$ number of operations if no more processors become idle, the number of operations $O(k_I^3)$ required by TDTI should be balanced by the cost of the update, i.e. $k_I r$, which leads to $k_I = \sqrt{r}$. With those choices of $k_I$ and $k_B$, and assuming that there are enough processors, the number of choices for $k_I$ (and so for $k_B$) is $O(\sqrt{r})$; the cost of the resulting hybrid algorithm becomes $T_1 = O(n^2)$ and $T_\infty = O(\sqrt{n} \log^2 n)$, a complexity similar to the one proposed in [20] with a fine-grain parallel algorithm, while this one is coarse-grain and dynamically adapts to resource idleness. Notice that if only a few processors are available, the parallel algorithm will be executed on at most one block of size $\sqrt{n}$. The BUT algorithm will then behave like the previous hybrid tuned TRSM algorithm. Also, the algorithm is oblivious to the number of resources and to their relative performance.
4 Conclusion Designing efficient hybrid algorithms is the key to getting the most out of the available resources and of the structure of the inputs in numerous applications, as we have shown e.g. for linear algebra and for combinatorial optimization Branch&X. In this paper, we have proposed a classification of the distinct forms of hybrid algorithms and a generic framework to express this adaptivity. On a single simple example, namely the solving of linear systems, we have shown that several of these "hybridities" can appear. This enables an effective hybridization of the algorithm and a nice way to adapt its behavior automatically, independently of the execution context. This is true in a parallel context, where the coupling of algorithms is critical to obtain high performance. The resulting algorithm is quite complex but can be automatically generated in our simple framework. The requirements are just to provide recursive versions of the different methods. In the AHA group\(^5\), such couplings are studied in the context of many examples: vision and adaptive 3D-reconstruction, linear algebra in general, and combinatorial optimization. Acknowledgments. The authors gratefully acknowledge David B. Saunders for useful discussions and suggestions for the classification of hybrid algorithms. \(^5\)aha.imag.fr References A Appendix A.1 Bottom-up TRSM We need to group the last recursive $\textsc{ULeft-Trsm}$ call and the update of $B_1$. The following algorithm thus computes only these last two steps, the first step being performed by the work stealing, as shown afterwards. Algorithm BUT Input: $(A_2; A_3; B)$. Output: $X_2, k_{BUT}$. Mutual Exclusion section { if ($k_{TDTI} \geq k_{BUT}$) Return; $k_B := \text{Choice}(1..(k_{BUT} - k_{TDTI}))$.
  Split remaining columns into $k_{TDTI}..(k_{BUT} - k_B)$ and $(k_{BUT} - k_B)..k_{BUT}$:
  $$\begin{bmatrix} A_{2,1} & A_{2,2} \\ A_{3,1} & A_{3,2} \\ & A_{3,3} \end{bmatrix} \begin{bmatrix} X_{2,1} \\ X_{2,2} \end{bmatrix} = \begin{bmatrix} B_1 \\ B_{2,1} \\ B_{2,2} \end{bmatrix}$$
  $k_{BUT} := k_{BUT} - k_B$;
}
$X_{2,2} := \textsc{ULeft-Trsm}(A_{3,3}, B_{2,2})$;
$B_1 := B_1 - A_{2,2}X_{2,2}$; $B_{2,1} := B_{2,1} - A_{3,2}X_{2,2}$;
$X_{2,1} := \textsc{BUT}(A_{2,1}; A_{3,1}; [B_1; B_{2,1}])$

A.2 Top-down triangular inversion of $A_1$

Algorithm TDTI
**Input:** $(A_1^{-1}; A_2; A_3)$. **Output:** $A_1^{-1}, k_{TDTI}$.
Mutual Exclusion section {
  if ($k_{TDTI} \geq k_{BUT}$) Return;
  $k_I := \text{Choice}(1..(k_{BUT} - k_{TDTI}))$.
  Split remaining columns of $A_2$ and $A_3$ into $k_{TDTI}..(k_{TDTI} + k_I)$ and $(k_{TDTI} + k_I)..k_{BUT}$:
  $$\begin{bmatrix} A_{2,1} & A_{2,2} \\ A_{3,1} & A_{3,2} \\ & A_{3,3} \end{bmatrix}$$
}
Parallel { $A_{3,1}^{-1} := \text{Inverse}(A_{3,1})$; $T := A_1^{-1}.A_{2,1}$ }
Now, let $A_1^{-1} := \begin{bmatrix} A_1^{-1} & -T.A_{3,1}^{-1} \\ & A_{3,1}^{-1} \end{bmatrix}$ and $A_2' := \begin{bmatrix} A_{2,2} \\ A_{3,2} \end{bmatrix}$
Mutual Exclusion section { $k_{TDTI} := k_{TDTI} + k_I$; }
$A_{3,3} := \text{TDTI}(A_1^{-1}; A_2'; A_{3,3})$;
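The recursive splitting that $\textsc{ULeft-Trsm}$ and BUT rely on can be made concrete with a minimal sequential version. The sketch below is ours and omits the work-stealing and block-size machinery: it solves an upper-triangular system $AX = B$ by solving the bottom block first, updating the top right-hand side, and recursing on the top block.

```python
def uleft_trsm(A, B):
    """Recursive upper-triangular solve A X = B (sketch, names ours).

    A is an n x n upper-triangular matrix and B an n x m right-hand side,
    both as lists of lists. The paper's BUT/TDTI variants apply the same
    split columnwise with adaptive block sizes.
    """
    n = len(A)
    if n == 1:
        return [[b / A[0][0] for b in B[0]]]
    h = n // 2
    A1 = [row[:h] for row in A[:h]]  # top-left triangular block
    A2 = [row[h:] for row in A[:h]]  # top-right rectangular block
    A3 = [row[h:] for row in A[h:]]  # bottom-right triangular block
    X2 = uleft_trsm(A3, B[h:])       # solve the bottom part first
    m = len(B[0])
    # update: B1 := B1 - A2 * X2, then solve the top part
    B1 = [[B[i][j] - sum(A2[i][k] * X2[k][j] for k in range(n - h))
           for j in range(m)] for i in range(h)]
    return uleft_trsm(A1, B1) + X2
```

The update step `B1 := B1 - A2 * X2` is exactly the $B_1 := B_1 - A_{2,2}X_{2,2}$ line of algorithm BUT, specialized to a half/half split.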
Verification of the Tesla protocol in MCMAS-X

Alessio Lomuscio\textsuperscript{1,*}, Franco Raimondi\textsuperscript{1} and Božena Woźna\textsuperscript{2,**}

\textsuperscript{1} Department of Computer Science, University College London, Gower Street, London WC1E 6BT, United Kingdom. email: \{A.Lomuscio,F.Raimondi\}@cs.ucl.ac.uk
\textsuperscript{2} IMCS, Jan Długosz University, Armii Krajowej 13/15, 42-200 Częstochowa, Poland. B.Wozna@mp.ajd.czest.pl

Abstract. We present MCMAS-X, an extension of the OBDD-based model checker MCMAS for multi-agent systems, to explicit and deductive knowledge. We use MCMAS-X to verify authentication properties in the Tesla secure stream protocol.

1 Introduction

Model checking has traditionally been used for the verification of reactive systems whose properties are specified in one of the many variants of temporal logic. But autonomous and open systems, such as multi-agent systems [25], are best described and reasoned about by richer formalisms, whose study is often pursued in frameworks studied in Artificial Intelligence (AI). One of the richer logics used in AI for this task is epistemic logic, or the logic of knowledge [7], often combined with temporal logic [16, 9, 17, 14]. Epistemic logic has been shown useful in the modelling of a variety of scenarios from robotics, communication, etc., all sharing the need to represent formally the knowledge of the agents. Also of great interest is the use of temporal-epistemic formalisms to represent and analyse formally security protocols. While the original BAN logics [5] lacked computational grounding, more recent attempts [10, 15] provide a full trace-based semantics to interpret the epistemic modalities as well as the standard temporal modalities. Key to these approaches is the use of not only a modality for implicit knowledge, representing the knowledge that can be ascribed to a principal from an external point of view, but also one for explicit knowledge [7, 21, 13].
While model checkers for standard temporal (implicit) knowledge have recently been made available [8, 19, 12], they currently do not support explicit knowledge and derivable notions, and so their applicability to an “epistemically-oriented” verification of security protocols has not been pursued yet\textsuperscript{1}.

* The authors acknowledge support from the EPSRC (grant GR/S49353 and grant CNA 04/04) and the Nuffield Foundation (grant NAL/690/G).
** The work presented here was prepared while B. Woźna was at University College London working on EPSRC grant GR/S49353.
\textsuperscript{1} Anonymity protocols (such as the dining cryptographers) can successfully be analysed by using implicit knowledge only [8, 24, 11].

The aim of this research note is twofold: first, we present a model checker that supports modalities for explicit and deductive knowledge; second, we report on the use of these techniques to validate the correctness of TESLA [20], a protocol for secure real-time streaming. The work presented here builds upon our earlier analysis of TESLA [15] and our engineering of MCMAS [12], a symbolic model checker for multi-agent systems.

The rest of the paper is organised as follows. In Section 2 we present the syntax and semantics of the logical formalism used throughout the paper. In Section 3 we briefly present MCMAS-X. In Section 4 we introduce the TESLA protocol and in Section 5 we model check some of its key properties. We conclude in Section 6 by discussing experimental results.

2 A Temporal Epistemic Logic

We briefly present the syntax and semantics of TDL [15], a multi-modal temporal epistemic logic with security-specialised primitives; we assume familiarity with the intuitive meaning of basic cryptographic primitives like keys, nonces, pseudo-random functions, and MAC functions. This section summarises material in [15].

Syntax. We begin with the definition of messages, which constitute a base for the security-specialised part of TDL.
Assume the following disjoint sets: a set $K = \{k_1, k_2, \ldots\}$ of symmetric and asymmetric keys, a set $N$ of nonces, a set $T = \{t_1, t_2, \ldots\}$ of plain-texts, and a set $F$ of commitments to keys defined by $\{f(k) \mid k \in K\}$, where $f : K \rightarrow \{0, 1, \ldots\}$ is a pseudo-random function; the commitment to a key $k$ is an integer value computed by applying the pseudo-random function $f$ to $k$. It is assumed that $f$ cannot be inverted, so the key $k$ cannot be computed from the commitment to $k$. The set of messages $M$ is defined by the following grammar: $$m := t \mid k \mid n \mid f(k) \mid m \cdot m \mid \{m\}_k \mid \text{MAC}(k, m)$$ where $t \in T$, $k \in K$, $n \in N$, $f(k) \in F$, $m$ is a generic message, and $\text{MAC} : K \times M \rightarrow \{0, 1, \ldots\}$ is a message authentication code function. Again, we assume that the inverse of $\text{MAC}$ cannot be computed (so the key $k$ cannot be inferred from the MAC value). We write $m \cdot m'$ for the concatenation of $m$ and $m'$, $\{m\}_k$ for the encryption of $m$ with the key $k$, and $\text{MAC}(k, m)$ for the message authentication code of $m$ under $k$. We assume that the set $K$ is closed under inverses, i.e., for a given key $k \in K$ there is an inverse key $k^{-1} \in K$ such that $\{\{m\}_k\}_{k^{-1}} = m$. If the cryptosystem uses symmetric keys, then $k = k^{-1}$; for a public-key cryptosystem $k$ and $k^{-1}$ are different. We also define a submessage binary relation $\sqsubseteq$ on $M$ as the smallest reflexive and transitive relation satisfying the following conditions: (1) $m \sqsubseteq m \cdot m'$, (2) $m \sqsubseteq m' \cdot m$, (3) $m \sqsubseteq \{m\}_k$. Let $PV$ be a set of propositional variables, $\mathcal{AG}$ a finite set of agents, $p \in PV$, $i \in \mathcal{AG}$, and $m \in M$.
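The message grammar and the submessage relation $\sqsubseteq$ can be rendered as a small sketch; the representation below is ours, for intuition only. Note how, following conditions (1)-(3), the body of an encryption is a submessage but the encrypting key is not.

```python
from dataclasses import dataclass

# A minimal rendering of the grammar m := t | k | f(k) | m.m' | {m}_k
# (nonces and MAC terms are omitted; all names are ours).
@dataclass(frozen=True)
class Text:   value: str
@dataclass(frozen=True)
class Key:    name: str
@dataclass(frozen=True)
class Commit: key: Key                     # f(k), one-way commitment
@dataclass(frozen=True)
class Concat: left: object; right: object  # m . m'
@dataclass(frozen=True)
class Enc:    body: object; key: Key       # {m}_k

def submessage(m, m2) -> bool:
    """m ⊑ m2: the smallest reflexive, transitive relation closed under
    (1) m ⊑ m.m', (2) m ⊑ m'.m, and (3) m ⊑ {m}_k."""
    if m == m2:                    # reflexivity
        return True
    if isinstance(m2, Concat):     # conditions (1) and (2)
        return submessage(m, m2.left) or submessage(m, m2.right)
    if isinstance(m2, Enc):        # condition (3): body only, never the key
        return submessage(m, m2.body)
    return False
```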
The set $WF(TDL)$ of well-formed TDL formulas is defined by the following grammar:

\[
\varphi := p \mid \text{has}_i(m) \mid \text{sent}_i(m) \mid \text{received}_i(m) \mid \text{faked}_i(m) \mid \text{dropped}_i(m) \mid \neg \varphi \mid \varphi \lor \varphi \mid EX\varphi \mid E(\varphi\,U\,\varphi) \mid A(\varphi\,U\,\varphi) \mid K_i \varphi \mid \mathcal{X}_i \varphi \mid \mathcal{A}_i \varphi
\]

The meaning of the temporal and epistemic operators is standard. We shall further use the shortcut $D_i \alpha$ to represent $E(K_i \alpha \, U \, \mathcal{X}_i \alpha)$. The formula $D_i \alpha$ is read as “agent $i$ may deduce $\alpha$ (by some computational process)”. For more details we refer to [15, 13].

**Interpreted Systems.** In this section we briefly summarise the multi-agent framework [7] over which a semantics for TDL will be given. In particular, we focus on a specific class of multi-agent systems, appropriate to modelling security protocols: message-passing systems in which one or more of the agents is an adversary controlling the communication channel. A multi-agent system (MAS) consists of $n$ agents and an environment, each of which is in some particular local state at a given point in time. We assume that an agent’s local state encapsulates all the information the agent has access to, and the local states of the environment describe information that is relevant to the system but that is not included in any agent’s local state; the environment can be viewed as just another agent, as we do here. In the security setting considered here, we assume that the local state of an agent is a sequence of events of the form $(e_0, \ldots, e_m)$, where $e_0$ is the initial event, and for $i \in \{1, \ldots, m\}$, $e_i$ is a term of the form $\text{sent}(i, m)$ or $\text{recv}(m)$, where $m$ is a message and $i$ is an agent. The term $\text{sent}(i, m)$ stands for the fact that the agent has sent message $m$ to agent $i$.
Similarly the term \(\text{recv}(m)\) represents that the agent has received message \(m\). Note that in \(\text{recv}(m)\) the sender is not specified. This is because the receiver will not in general be able to determine the sender of a message he has received. A multi-agent system is not a static entity. Its computations are usually defined by means of runs (see [7]), where a run is a sequence of all the possible global states. Thus, in these settings, an interpreted system for a multi-agent system is defined as a set of runs together with a valuation function for the propositional variables of the language under consideration. We interpret TDL on an extension of interpreted systems representing awareness sets; for more details we refer to [7, 15]. **Definition 1 (Interpreted system).** Let \(\mathcal{AG}\) be a finite set of \(n\) agents, and let each agent \(i \in \mathcal{AG}\) be associated with a set of local states \(L_i\), and the environment be associated with a set of local states \(L_e\). Then, an interpreted system is a tuple \(M = (S, T, \sim_1, \ldots, \sim_n, \mathcal{V}, \mathcal{A}_1, \ldots, \mathcal{A}_n)\) such that \(S \subseteq \prod_{i=1}^n L_i \times L_e\) is a set of global states, \(T \subseteq S \times S\) is a serial (temporal) relation on \(S\), for each agent \(i \in \mathcal{AG}\), \(\sim_i \subseteq S \times S\) is an equivalence (epistemic) relation defined by: \(s \sim_i s'\) iff \(l_i(s') = l_i(s)\), where \(l_i : S \rightarrow L_i\) is a function that returns the local state of agent \(i\) from a global state, \(\mathcal{V} : S \rightarrow 2^{PV}\) is a valuation function, and \(\mathcal{A}_i : L_i \rightarrow 2^{\mathcal{WF}(TDL)}\) is an awareness function assigning a set of formulas to each state, for each \(i \in \mathcal{AG}\). Awareness sets represent facts (expressed as TDL formulas) an agent is aware of at a given state; we refer to [7, 15] for more details. 
**Satisfaction.** A path in $M$ is an infinite sequence $\pi = (s_0, s_1, \ldots)$ of global states such that $(s_i, s_{i+1}) \in T$ for each $i \in \mathbb{N}$. For a path $\pi = (s_0, s_1, \ldots)$, we take $\pi(k) = s_k$. By $\Pi(s)$ we denote the set of all the paths starting at $s \in S$.

**Definition 2 (Satisfaction).** Let $M$ be an interpreted system, $s$ a state, and $\alpha$, $\beta$ TDL formulas. The satisfaction relation $\models$, indicating truth of a formula in $M$ at state $s$, is defined inductively as follows:

- $(M, s) \models p$ iff $p \in V(s)$,
- $(M, s) \models \neg \alpha$ iff $(M, s) \not\models \alpha$,
- $(M, s) \models \alpha \lor \beta$ iff $(M, s) \models \alpha$ or $(M, s) \models \beta$,
- $(M, s) \models \text{sent}_i(m)$ iff there exist $m' \in M$ and $j \in \mathcal{AG}$ such that $m \sqsubseteq m'$ and $\text{sent}(j, m') \in l_i(s)$,
- $(M, s) \models E(\alpha\,U\,\beta)$ iff there exist $\pi \in \Pi(s)$ and $k \geq 0$ such that $(M, \pi(k)) \models \beta$ and $(M, \pi(j)) \models \alpha$ for all $0 \leq j < k$,
- $(M, s) \models A(\alpha\,U\,\beta)$ iff for every $\pi \in \Pi(s)$ there exists $k \geq 0$ such that $(M, \pi(k)) \models \beta$ and $(M, \pi(j)) \models \alpha$ for all $0 \leq j < k$,
- $(M, s) \models K_i \alpha$ iff $(M, s') \models \alpha$ for all $s' \in S$ such that $s \sim_i s'$,
- $(M, s) \models \mathcal{A}_i \alpha$ iff $\alpha \in \mathcal{A}_i(l_i(s))$,
- $(M, s) \models \mathcal{X}_i \alpha$ iff $(M, s) \models K_i \alpha$ and $\alpha \in \mathcal{A}_i(l_i(s))$.

We leave the definitions of the other security-specialised propositions open: for distinct protocols these propositions will be defined differently, and they are not needed for the analysis of TESLA presented below. Let $M$ be an interpreted system. We say that a TDL formula $\varphi$ is valid in $M$, or that $M$ is a model for $\varphi$ (written $M \models \varphi$), if $M, s \models \varphi$ for all states $s \in S$.

### 3 The model checkers MCMAS and MCMAS-X

**Overview of MCMAS.** MCMAS is a symbolic model checker for multi-agent systems developed for the automatic verification of temporal and epistemic modalities in interpreted systems [7], as well as of other modalities to reason about strategies and correct behaviour of agents [24]. MCMAS implements efficient verification algorithms based on ordered binary decision diagrams (OBDDs, see [4] for more details).
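On small explicit models, the clauses for implicit knowledge $K_i$ and explicit knowledge $\mathcal{X}_i$ can be evaluated directly; the sketch below is ours, for intuition (MCMAS performs the corresponding computation symbolically on OBDDs rather than by enumeration).

```python
def sat_K(states, local_i, holds, s):
    """K_i alpha at s: alpha holds at every global state agent i cannot
    distinguish from s, i.e. every s' with the same local state.
    `holds` is the extension [[alpha]]; `local_i` maps global states to
    agent i's local state. All names are ours."""
    return all(s2 in holds for s2 in states if local_i(s2) == local_i(s))

def sat_X(states, local_i, aware_i, holds, alpha, s):
    """X_i alpha at s: i implicitly knows alpha AND alpha is in i's
    awareness set at its current local state."""
    return sat_K(states, local_i, holds, s) and alpha in aware_i(local_i(s))
```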
An input to MCMAS is a program written in ISPL (Interpreted Systems Programming Language) representing all the possible evolutions of the system under analysis. ISPL is an SMV-like programming language for the description of interpreted systems. An ISPL program contains a list of agents, each of which is declared by reserved keywords:

```
Agent <AgentID>
  <AgBody>
end Agent
```

Above, `<AgentID>` is any string uniquely identifying an agent, and `<AgBody>` contains the declarations of the local states, the actions, the protocols, and the evolution function for the agent. Following the agents’ declaration, an ISPL file includes sections to declare the set of initial states, the evaluation function, and the set of formulae to be verified. Figure 1 reports the definition of a simple agent; we refer to the documentation available [22] for more details about the ISPL language.

```
Agent SampleAgent
  Lstate = {s0,s1};
  Lgreen = {s0,s1};
  Action = {a1,a2};
  Protocol:
    s0: {a1};
    s1: {a1, a2};
  end Protocol
  Ev:
    s1 if (Lstate=s0 and Action=a1 and AnotherAgent.Action=a7);
    s0 if (Lstate=s1 and Action=a1);
  end Ev
end Agent
```

Fig. 1. An agent’s definition using ISPL.

MCMAS is available under the terms of the GNU General Public License (GPL) and has been compiled on a number of platforms. MCMAS is run from the command line and accepts various input parameters to inspect and fine-tune its performance.

MCMAS-X: an extension of MCMAS. MCMAS-X extends MCMAS to support the verification of the operators $\mathcal{X}_i$, $\mathcal{A}_i$, and $D_i$ (see Section 2). Given an interpreted system $M$, let $[[\varphi]]$ denote the set of global states of $M$ in which $\varphi$ holds. By the definition of satisfiability given in Section 2, we have:

$$[[\mathcal{A}_i(\varphi)]] = \{s \in S \mid \varphi \in \mathcal{A}_i(l_i(s))\}$$

Using standard procedures (e.g., see [6, 24]) this definition can be recast in terms of OBDDs, so that the set $[[\mathcal{A}_i(\varphi)]]$ can be expressed as an OBDD.
Consequently, the sets of states $[[\mathcal{X}_i(\varphi)]]$ and $[[D_i(\varphi)]]$ can be expressed using OBDDs, too. We have implemented software procedures to perform the computation of these sets automatically in a tool called MCMAS-X, available for download [23]. MCMAS-X extends MCMAS's syntax in two ways: first, it supports the verification of all the formulae introduced in Section 2; second, it augments the description of an agent with the definition of the function $\mathcal{A}_i$. This latter step is achieved by introducing the keywords

```
Aware:
  <definitions>
end Aware
```

as exemplified in Figure 2. In this example, the agent SampleAgent is aware of propositions $p1$ and $p2$ in local state $s0$, and of proposition $p2$ in local state $s1$. Note that, following the definitions of Section 2, no consistency checks are made when defining awareness sets.

```
Agent SampleAgent
  Lstate = {s0, s1, s2, s3};
  Lgreen = {s0, s1, s2};
  Action = {a1, a2, a3};
  Protocol: ... end Protocol
  Ev: ... end Ev
  Aware:
    s0 : {p1, p2};
    s1 : {p2};
  end Aware
end Agent
```

Fig. 2. An agent's definition using ISPL in MCMAS-X.

4 The Tesla protocol

In this section we introduce the \textit{timed efficient stream loss-tolerant authentication} (TESLA) protocol [20]. TESLA provides secure authentication of the source of each packet in multicast or broadcast data streams. Five schemes of the protocol exist; each assumes a single sender (S) broadcasting a continuous stream of packets to receivers (R) acting independently of one another. Below we describe the first variant of the TESLA protocol, and we take into consideration one receiver only.
In order to provide security, TESLA assumes that: (1) the sender and the receiver are loosely time-synchronized; this can be done via a simple two-message exchange using, for example, the NTP protocol [18]; (2) the protocol is bootstrapped through a regular data authentication system; this can be done using any secure session initiation protocol; (3) the protocol uses cryptographic primitives like MAC values and pseudo-random functions (PRFs); a MAC is computed by a \textit{message authentication code} function that takes as input a message and a secret key, whereas a PRF provides \textit{commitments} to keys. It is assumed that S and R know the PRF as well as the message authentication code function to be used in the session. Following [1, 3], we now outline a TESLA scheme assuming that the protocol uses one pseudo-random function only, the participants are initially synchronised, R knows the disclosure schedule of the keys, and S sends packets at regular intervals that are agreed with R during the synchronisation process. More details are in [20]. Let $[x, y]$ denote the concatenation of $x$ and $y$.
Assuming that S has a digital signature key pair, with private key $k_S^{-1}$ and public key $k_S$ known to R, and that R chooses a random and unpredictable nonce, the initial $n$ steps, for $n > 1$, of the protocol for one sender and one receiver are the following:

(-1) $R \rightarrow S : n_R$
(0) $S \rightarrow R : \{f(k_1), n_R\}_{k_S^{-1}}$
(1) $S \rightarrow R : [P_1, \text{MAC}(k_1, P_1)]$, for $P_1 = [t_1, f(k_2)]$
(2) $S \rightarrow R : [P_2, \text{MAC}(k_2, P_2)]$, for $P_2 = [t_2, f(k_3), k_1]$
$\vdots$
(n) $S \rightarrow R : [P_n, \text{MAC}(k_n, P_n)]$, for $P_n = [t_n, f(k_{n+1}), k_{n-1}]$

As one can see from the above, with the exception of the two initial packets, which are used to bootstrap the broadcasting process, each packet $P_i$ contains: (1) the message $t_i$ to be delivered; (2) a \textit{commitment} $f(k_{i+1})$ to the key to be used to compute the MAC of the next packet; (3) the key $k_{i-1}$ that was used to compute the MAC of the previously sent packet; (4) the MAC $\text{MAC}(k_i, P_i)$ of the current packet. Tesla guarantees, among other properties, the following security property: "the receiver does not accept as authentic any message unless it was actually sent by the sender". We verify this and other properties by means of MCMAS-X in the next section.

5 The Tesla protocol and MCMAS-X

In this section we model check the Tesla protocol by means of MCMAS-X. To do this we define and encode an interpreted system $M = (S, T, \sim_S, \sim_R, \sim_I, V, \mathcal{A}_S, \mathcal{A}_R, \mathcal{A}_I)$ representing Tesla’s executions. Since our state space needs to be finite, we set a limit $n$ on the number of packets that can be broadcast during one session; this assumption does not affect the analysis, as no attack depends on the number of broadcast packets.
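For intuition, the receiver-side check performed once a key is disclosed can be sketched concretely. The code below is ours, not part of TESLA or of the paper's model: the PRF and MAC instantiations (SHA-256-based) are arbitrary stand-ins for the shared $f$ and MAC functions assumed above.

```python
import hashlib
import hmac

def f(key: bytes) -> bytes:
    """Stand-in pseudo-random function for key commitments (our choice)."""
    return hashlib.sha256(b"commit:" + key).digest()

def verify_packet(commitment_prev: bytes, payload_prev: bytes,
                  mac_prev: bytes, disclosed_key: bytes) -> bool:
    """Receiver-side check when k_i is disclosed in P_{i+1}:
    (a) the disclosed key matches the commitment f(k_i) from P_{i-1};
    (b) the MAC of the buffered packet P_i verifies under k_i."""
    if f(disclosed_key) != commitment_prev:
        return False  # disclosed key does not match the commitment: reject
    expected = hmac.new(disclosed_key, payload_prev, hashlib.sha256).digest()
    return hmac.compare_digest(expected, mac_prev)
```

This captures why the key chain matters: a faked packet either fails the commitment check (wrong key) or fails the MAC check (key not known to the intruder at send time).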
As defined in Section 4, the Tesla protocol involves two participants: a sender ($S$) and a receiver ($R$), communicating through an unreliable channel that is under the complete control of an intruder ($I$). In the interpreted system framework it is convenient to see the principals as agents, and the intruder as the environment. While specifying the agents (i.e., defining a set of local states, a set of actions, a protocol, and an evolution function), we assume that $S$ has all the information he needs to prepare a packet, i.e., he has a complete set of messages $M_S \subseteq M$. We also assume that $M_S$ constitutes $S$’s initial database, which remains accessible to him throughout the run. Moreover, we assume that $I$ has all the information needed to prepare well-formed packets, with $M_I \subseteq M$ such that $M_I \cap M_S = \emptyset$, and we assume that $M_I$ can grow during the run. We work with a Dolev-Yao intruder in control of the channel, able to encrypt and decrypt messages if he has the appropriate key. We assume the intruder sends (resends and fakes) well-formed packets only, i.e., any packet contains a message body, a key commitment, a key, and an appropriate MAC value. Finally, we assume that $S$, $R$, and $I$ use a shared PRF and a shared MAC function, that $R$ and $I$ know the public key of $S$, that $S$ and $I$ begin with disjoint sets of keys, and that $R$ knows the precise schedule of packets, this information being incorporated into the first packet $P_0$, which cannot be dropped or faked.

We introduce the following sets of local states for $S$, $R$ and $I$, respectively (primed packets $P_i'$ denote faked versions of $P_i$):

\[
L_S = \{[\,], [\text{recv}(n_R)], [\text{sent}(R, P_0)]\} \cup \{[\text{sent}(R, P_{i-1}), \text{sent}(R, P_i)] \mid 0 < i \leq n\} \cup \{[\text{sent}(R, P_{i-1}), \text{sent}(R, P_i), \text{sent}(R, P_{i+1})] \mid 0 < i \leq n\},
\]

\[
L_R = \{[\,], [\text{sent}(S, n_R)], [\text{stop}], [\text{recv}(P_0)]\} \cup \{[\text{recv}(P_{i-1}), \text{recv}(P_i)] \mid 0 < i \leq n\} \cup \{[\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1})] \mid 0 < i \leq n\} \cup \{[\text{recv}(P_0), \text{recv}(P_1')]\} \cup \{[\text{recv}(P_0), \text{recv}(P_1'), \text{recv}(P_2)]\} \cup \{[\text{recv}(P_i), \text{recv}(P_{i+1}'), \text{recv}(P_{i+2})] \mid 0 < i \leq n\},
\]

\[
L_I = \{[\,], [\text{recv}(n_R)], [\text{recv}(P_0)]\} \cup \{[\text{recv}(P_0), \text{recv}(P_1)]\} \cup \{[\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1})] \mid 0 < i \leq n\} \cup \{[\text{recv}(P_0), \text{recv}(P_1), \text{sent}(R, P_1')]\} \cup \{[\text{recv}(P_0), \text{recv}(P_1), \text{sent}(R, P_1'), \text{recv}(P_2)]\} \cup \{[\text{recv}(P_0), \text{recv}(P_1), \text{sent}(R, P_1'), \text{recv}(P_2), \text{sent}(R, P_2')]\} \cup \{[\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1}), \text{sent}(R, P_{i+1}')] \mid 0 < i \leq n\},
\]

and the following sets of actions, performed in compliance with the description in Section 4:

- $\text{Act}_S = \{\lambda\} \cup \{\text{send}P_i, \text{accept}P_i \mid 0 < i \leq n\}$,
- $\text{Act}_R = \{\lambda, \text{nonce}, \text{stop}\} \cup \{\text{accept}P_i \mid 0 < i \leq n\}$,
- $\text{Act}_I = \{\lambda\} \cup \{\text{drop}P_i, \text{fake}P_i, \text{accept}P_i \mid 0 < i \leq n\}$.
The intuitive meaning of $S$'s local states is the following: $[\,]$ represents $S$'s initial state in the protocol; $[\text{recv}(n_R)]$ represents the reception of the message sent by $R$ in order to establish communication; $[\text{sent}(R, P_0)]$ represents the fact that $S$ has just sent packet $P_0$ to $R$; $[\text{sent}(R, P_{i-1}), \text{sent}(R, P_i)]$ and $[\text{sent}(R, P_{i-1}), \text{sent}(R, P_i), \text{sent}(R, P_{i+1})]$ represent the fact that $S$ has sent the packets $P_j$, where $j \leq i+1$ and $0 < i \leq n$. With regard to $S$'s actions, $\lambda$ is the null action, $\text{send}P_i$ stands for $S$ sending packet $P_i$, and $\text{accept}P_i$ represents the fact that $S$ recognises packet $P_i$ as accepted by the receiver.

$R$'s local states stand for the following: $[\,]$ represents $R$'s initial state in the protocol; $[\text{sent}(S, n_R)]$ represents the fact that $R$ has just sent the nonce $n_R$ to $S$ and is waiting for packets; $[\text{stop}]$ represents the fact that $R$ has just stopped collecting packets; $[\text{recv}(P_0)]$, $[\text{recv}(P_{i-1}), \text{recv}(P_i)]$ and $[\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1})]$ represent the packets $R$ has received from $S$; $[\text{recv}(P_0), \text{recv}(P_1')]$, $[\text{recv}(P_0), \text{recv}(P_1'), \text{recv}(P_2)]$, and $[\text{recv}(P_i), \text{recv}(P_{i+1}'), \text{recv}(P_{i+2})]$ represent states in which $R$ has received faked packets. As regards $R$'s actions, $\text{accept}P_i$ represents $R$ accepting packet $P_i$ as authentic; the other action names have intuitive correspondences.
For what concerns \( I \): \([\,]\) represents \( I \)'s initial state in the protocol; \([\text{recv}(n_R)]\) stands for \( I \)'s state following the interception of \( R \)'s initial message to \( S \); \([\text{recv}(P_0)]\), \([\text{recv}(P_0), \text{recv}(P_1)]\) and \([\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1})]\) represent the packets intercepted by \( I \); \([\text{recv}(P_0), \text{recv}(P_1), \text{send}(R, P_1')]\), \([\text{recv}(P_0), \text{recv}(P_1), \text{send}(R, P_1'), \text{recv}(P_2)]\), \([\text{recv}(P_0), \text{recv}(P_1), \text{send}(R, P_1'), \text{recv}(P_2), \text{send}(R, P_2')]\), and \([\text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1}), \text{send}(R, P_{i+1}')]\) represent the packets intercepted by \( I \) together with the faked versions it has sent. The action \( \text{accept}P_i \) denotes the fact that the intruder is not able to fake or drop packet \( P_i \); \( \text{drop}P_i \) (respectively, \( \text{fake}P_i \)) encodes the action of \( I \) dropping (respectively, faking) packet \( P_i \).

Having defined the sets of states and actions for the multi-agent system representing TESLA, we can now describe how the protocol evolves. In the multi-agent setting this is defined by means of an evolution function \( t : S \times \text{Act} \rightarrow 2^{L_S \times L_R \times L_I} \), where \( \text{Act} \subseteq \text{Act}_S \times \text{Act}_R \times \text{Act}_I \) and \( S \subseteq L_S \times L_R \times L_I \). The function \( t \) induces the transition relation \( T \): for all \( s, s' \in S \), \( (s, s') \in T \) if there exists \( \text{act} \in \text{Act} \) such that \( s' \in t(s, \text{act}) \). We do not report the full evolution function for TESLA here; it can be found in [15]. To finalise the description of the interpreted system \( M \) for TESLA, we have to define a valuation function \( V : S \rightarrow 2^{PV} \) and the awareness functions \( A_X : L_X \rightarrow 2^{WF(TDL)} \), for \( X \in \{S, R, I\} \).
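The passage from the evolution function \( t \) to the transition relation \( T \) admits a direct computational reading. The following is a minimal sketch with a toy state space and a hypothetical evolution function, not the actual TESLA model:

```python
# Toy illustration of T = {(s, s') | exists act in Act with s' in t(s, act)}.
# States, actions and the evolution function here are placeholders.

def transition_relation(states, acts, t):
    """Derive the transition relation T from the evolution function t."""
    T = set()
    for s in states:
        for act in acts:
            for s_next in t(s, act):  # t : S x Act -> 2^S
                T.add((s, s_next))
    return T

# A two-state toy system: 'a' can move to 'b' under action 'go'.
toy_t = lambda s, act: {"b"} if (s == "a" and act == "go") else set()
print(transition_relation({"a", "b"}, {"go", "idle"}, toy_t))  # {('a', 'b')}
```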
We first introduce the following set \( PV \) of propositional variables, which we find useful in the analysis of the TESLA scenario:

\[
PV = \{\, \text{has}_R(m), \text{sent}_S(m), \text{received}_R(m), \text{dropped}_I(m), \text{faked}_I(m) \mid m \in M \,\}
\]

We define \( V : S \rightarrow 2^{PV} \) as follows:

- \( \text{has}_R(t_i) \in V(s) \) if there exist packets \( P_{i-1} \), \( P_i \) and \( P_{i+1} \) such that \( f(k_i) \subseteq P_{i-1} \), \( t_i \subseteq P_i \), \( k_i \subseteq P_{i+1} \), and \( \text{recv}(P_{i-1}), \text{recv}(P_i), \text{recv}(P_{i+1}) \in l_R(s) \);
- \( \text{sent}_S(m) \in V(s) \) if there exists a packet \( P_i \) such that \( m \subseteq P_i \) and \( \text{sent}(R, P_i) \in l_S(s) \), for any \( m \in M_S \);
- \( \text{received}_R(m) \in V(s) \) if \( \text{recv}(m) \in l_R(s) \), for any \( m \in M_S \cup M_I \);
- \( \text{dropped}_I(m) \in V(s) \) if \( \text{recv}(m) \notin l_R(s) \) and \( \text{recv}(m) \in l_I(s) \), for any \( m \in M_S \);
- \( \text{faked}_I(m) \in V(s) \) if there exists a packet \( P_j \) such that \( m \subseteq P_j \) and \( \text{send}(R, P_j) \in l_I(s) \), for any \( m \in M_S \).

For \( R \) we take the following awareness function \( A_R : L_R \rightarrow 2^{WF(TDL)} \). Let \( l \in L_R \) and let \( \alpha \) be a TDL formula. Then \( \alpha \in A_R(l) \) if:

- \( \alpha = \text{received}_R(m) \), \( \text{recv}(m) \in l \) and \( m \in M_S \cup M_I \);
- \( \alpha = \text{faked}_I(m) \), \( l = [\text{stop}] \) and \( m \in M_S \cup M_I \);
- \( \alpha = \text{dropped}_I(m) \), \( l = [\text{stop}] \) and \( m \in M_S \);
- \( \alpha = \text{has}_R(m) \), either \( \text{recv}(m) \in l \) or there exists \( m' \) such that \( m \subseteq m' \) and \( \text{recv}(m') \in l \), and \( m \in M_S \cup M_I \).

For \( X \in \{S, I\} \), the awareness function \( A_X : L_X \rightarrow 2^{WF(TDL)} \) is empty: for any \( l \in L_X \), \( A_X(l) = \emptyset \).

To generate the above interpreted system for TESLA automatically, we have produced a C++ program that, given the number \( n \) of packets, generates the corresponding ISPL code (see Figure 3) to be used with MCMAS-X. In this way we can generate a number of instances of the protocol, which helps in evaluating the performance of MCMAS-X. Given the interpreted system \( M \) of TESLA as defined above, we now set out to check by means of MCMAS-X all the properties examined in [15].
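The generation step itself amounts to straightforward text emission. The paper's tool is written in C++; the following hypothetical Python sketch only illustrates the idea, emitting the opening lines of the Receiver agent in the shape of Figure 3:

```python
# Minimal sketch of an ISPL emitter in the spirit of the paper's C++
# generator. Only the Lstate/Action header of the Receiver agent is
# produced; the state and action names follow Figure 3 but the rest of
# the generator is omitted.

def receiver_header(n):
    """Emit the first lines of the Receiver agent for n packets."""
    lstates = ["empty", "send_s_nr", "stop", "recv_p0"]
    lstates += [f"recv_p0_recv_p{i}" for i in range(1, n + 1)]
    acts = ["nothing", "nonce", "stop"] + [f"accept_p{i}" for i in range(1, n + 1)]
    return ("Agent Receiver\n"
            f"  Lstate={{{','.join(lstates)}}};\n"
            f"  Action = {{{','.join(acts)}}};\n")

print(receiver_header(2))
```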
First we would like to establish whether or not TESLA satisfies the desired security property: “the receiver does not accept as authentic any message unless it was actually sent by the sender”, i.e., whether or not \( M \) is a model for the following TDL formula: for any \( 0 < i < n \),

\[
\text{has}_R(t_i) \Rightarrow (\text{sent}_S(P_{i-1}) \land \text{sent}_S(P_i) \land \text{sent}_S(P_{i+1})) \quad (1)
\]

Next we would like to check whether or not TESLA satisfies the stronger property “the receiver does not accept as authentic any message unless he knows that it was actually sent by the sender”. This is expressed by the following TDL formula: for any \( 0 < i < n \),

\[
\text{has}_R(t_i) \Rightarrow K_R(\text{sent}_S(P_{i-1}) \land \text{sent}_S(P_i) \land \text{sent}_S(P_{i+1})) \quad (2)
\]

Agent Receiver
  Lstate={empty,send_s_nr,stop,recv_p0,recv_p0_recv_p1,recv_p0_recv_p2,...};
  Action = {nothing,nonce,stop,accept_p1,accept_p2};
  Protocol:
    empty : {nonce};
    recv_p0 : {nothing};
    send_s_nr : {nothing};
    stop : {stop};
    recv_p0_recv_p2 : {stop};
    recv_p0_recv_p1 : {nothing};
  end Protocol
  Ev:
    stop if ((Lstate=stop and Action=stop and Sender.Action=nothing and Intruder.Action=nothing) or
             (Lstate=recv_p0_recv_p2 and Action=stop and Sender.Action=nothing and Intruder.Action=nothing) or ...);
  end Ev
  Aware:
    recv_p0 : {received_r_p0,has_r_p0};
    recv_p0_recv_p1 : {received_r_p0,received_r_p1,has_r_p0,has_r_p1};
    recv_p0_recv_p2 : {received_r_p0,has_r_p0,received_r_p2,has_r_p2};
  end Aware
end Agent

Fig. 3. A fragment of R’s definition in the ISPL format for n = 2.

Further, we would like to check whether TESLA meets the following properties:

(3) “it is always the case that the receiver does not accept as authentic any message unless he knows that it was actually sent by the sender”.
(4) “the principals know about the presence of the intruder”.
(5) the receiver is able to check the source of messages, i.e., “if a packet is faked, then the receiver would deduce this”.
(6) “if the receiver receives some packets \( P_{i-1} \), \( P_i \), and \( P_{i+1} \) with a message \( t_i \subseteq P_i \), and he does not accept \( t_i \) as authentic, then he knows that at least one of the packets was not sent by the intended sender”. In other words, if a packet was indeed faked, the receiver is able to deduce this fact.
(7) “the intruder has to send a packet at each interval, which was agreed by the sender and the receiver at the beginning of the transmission under consideration”.

The properties above can be expressed in a temporal-epistemic language by means of the formulas below.

\[
\Box(\text{has}_R(t_i) \Rightarrow K_R(\text{sent}_S(P_{i-1}) \land \text{sent}_S(P_i) \land \text{sent}_S(P_{i+1}))) \quad (3)
\]

\[
K_{SE}(\text{sent}_S(P_i) \land \neg \text{received}_R(P_i)) \quad (4)
\]

\[
\text{faked}_I(P_i) \Rightarrow D_R(\text{faked}_I(P_i)) \quad (5)
\]

\[
(\text{received}_R(P_{i-1}) \land \text{received}_R(P_i) \land \text{received}_R(P_{i+1}) \land \neg \text{has}_R(t_i)) \Rightarrow K_R(\neg \text{sent}_S(P_{i-1}) \lor \neg \text{sent}_S(P_i) \lor \neg \text{sent}_S(P_{i+1})) \quad (6)
\]

6 Experimental results and conclusions

We have employed the ISPL generator described in the previous section to create a number of instances of the TESLA protocol, from 5 to 320 packets. We have verified all the formulas above for every instance analysed. While process algebras [3] and Lynch-Vaandrager automata [1] have previously been used to analyse the protocol, our results establish its correctness with respect to the temporal-epistemic specifications above. MCMAS-X uses OBDDs to verify the properties; consequently, most of the computational time spent by the model checker goes into constructing a symbolic representation of the model of the system.
Table 1 reports some experimental results obtained using a MacBook Pro equipped with a 2.1 GHz Intel processor and 2 GB of RAM, running Mac OS X 10.4.6. The first column reports the number of packets, the second column contains the time required for the verification, while the third and fourth columns provide information about space requirements. In particular, column three lists the number of variables required to encode the example: from this value the size of the model can be deduced. For instance, 85 Boolean variables are required when \( n = 200 \), corresponding to a model of size \( 2^{85} \approx 4 \cdot 10^{25} \). The last column reports the actual memory used by MCMAS-X. <table> <thead> <tr> <th>N. of packets</th> <th>Time (sec)</th> <th>N. of BDD variables</th> <th>Memory (bytes)</th> </tr> </thead> <tbody> <tr> <td>5</td> <td>2</td> <td>40</td> <td>4612376</td> </tr> <tr> <td>10</td> <td>3</td> <td>48</td> <td>4737832</td> </tr> <tr> <td>20</td> <td>8</td> <td>55</td> <td>5644888</td> </tr> <tr> <td>50</td> <td>25</td> <td>67</td> <td>6562280</td> </tr> <tr> <td>100</td> <td>38</td> <td>76</td> <td>9572968</td> </tr> <tr> <td>150</td> <td>77</td> <td>82</td> <td>9191848</td> </tr> <tr> <td>200</td> <td>92</td> <td>85</td> <td>10674616</td> </tr> <tr> <td>250</td> <td>110</td> <td>91</td> <td>11481224</td> </tr> <tr> <td>320</td> <td>190</td> <td>91</td> <td>15703560</td> </tr> </tbody> </table> Table 1. Experimental results. Figure 4 depicts all the experimental results for time and memory requirements. The oscillating behaviour of the memory requirements shown in the figure is explained by the heuristic techniques employed in the construction of OBDDs (a similar behaviour was observed for a different example in [11]). Nevertheless, an increasing trend is evident, especially for time requirements (dotted line). Given that no other model checker is available to verify explicit knowledge, we cannot offer a direct comparison of the results above.
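The size estimate quoted above is easy to check: \( b \) Boolean variables encode a state space of at most \( 2^b \) states, so 85 variables give roughly \( 4 \cdot 10^{25} \):

```python
# Sanity check of the state-space sizes implied by the BDD variable
# counts in Table 1: b Boolean variables encode at most 2**b states.

def model_size(num_bdd_vars):
    return 2 ** num_bdd_vars

size = model_size(85)
print(f"{size:.2e}")  # about 3.87e+25, i.e. roughly 4 * 10**25
```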
Taken on their own, the results appear satisfactory. Obviously, other specialised model checkers exist to verify temporal-only properties (or simply reachability) of security protocols, notably AVISPA [2], but given the different emphasis of the two approaches it would not seem appropriate to compare experimental results.

References
A Conceptual Toolbox for Designing CSCW Applications Susanne Bødker Ellen Christiansen Manfred Thüring DAIMI PB - 489 December 1994

A CONCEPTUAL TOOLBOX FOR DESIGNING CSCW APPLICATIONS

Susanne Bødker, Dept. of Computer Science, Aarhus University, Ny Munkegade, DK-8000 Aarhus C, DENMARK. Tel: +45-89423256. E-mail: bodker@daimi.aau.dk

Ellen Christiansen, Dept. of Computer Science, Aarhus University, Ny Munkegade, DK-8000 Aarhus C, DENMARK (Present address: Dept. of Communication, Aalborg University, Langagervej 8, box 159, DK-9100 Aalborg, DENMARK). Tel: +45-98158522 x 7119. E-mail: ellen@hum.auc.dk

Manfred Thüring, empirica Communications and Technology Research, Oxfordstrasse 2, D-53111 Bonn, GERMANY (Present address: BIFOA, Universitaet zu Koeln, Universitaetsstrasse 45, D-50931 Koeln, GERMANY). Tel: +49-221-47603-11. E-mail: thuering@bifoa.uni-koeln.de

Abstract: This paper presents a conceptual toolbox, developed to support the design of CSCW applications in a large Esprit project, EuroCODE. Here, several groups of designers work to investigate computer support for cooperative work in large use organizations, at the same time as they work to develop an open development platform for CSCW applications. The conceptual toolbox has been developed to support communication in and among these design groups, between designers and users, and in future use of the open development platform. Rejecting the idea that one may design from a framework describing CSCW, the toolbox aims to support design by doing and to help bridge between work with users, technical design, and insights gained from theoretical and empirical CSCW research. This is done through the construction and use of various kinds of scenarios. These scenarios, created by designers in cooperation with users, are seen as important means of communication between groups of users and designers and among groups of designers. Scenario construction is guided by empirical concerns on the one hand, i.e.
actual investigatory and design work with users, and theoretical concerns on the other, through checklists and prototypical examples. To help its users produce scenarios on their own, the prototypical examples are meant to trigger "good ideas" with respect to work-oriented as well as technical solutions. Going through an example, the paper illustrates how scenarios can serve as boundary objects between groups in a major research and development project like EuroCODE.

**Keywords:** Cooperation in design, CSCW framework, scenarios

**INTRODUCTION**

This paper presents a certain design idea coming out of the Esprit III project, EuroCODE. This design idea is not a technical solution in the traditional sense, but a conceptual toolbox for supporting the design of CSCW applications in EuroCODE. EuroCODE as such aims at building a development environment for CSCW applications, consisting of an open CSCW shell, three demonstrator systems, and a CSCW framework, or, as we have come to call it, a conceptual toolbox [Boedker et al. 93b]. What this means is that we aim at offering support for developers at a technical and at a conceptual level. The shell provides a number of building blocks which can be used to develop a CSCW application from scratch or to enhance one of the demonstrator systems with additional functionality, and the intention of the conceptual toolbox has been to support this process. The demonstrators utilize the shell and toolbox in building three prototype applications within the application domain (two supporting cooperation in the supervision of the construction of the Great Belt Bridge [Grønbæk et al. 93], and one for long-distance radiology at a hospital, Rikshospitalet [Holmes 94]).
As framework constructors we have faced the situation within EuroCODE that a) the shell and demonstrator design is distributed, geographically as well as with respect to the organization of work, b) too many persons take part for all to be involved in the investigation of, and cooperation with, the actual users at Great Belt and Rikshospitalet, and c) too many different competencies are needed for everybody to take an interest in empirical studies, or in theoretical CSCW. Although a lot of our previous work has assumed that such a distribution is not the best way to proceed, we find it, nevertheless, a fact of life, in this project as well as in other large design projects. Given this situation, the EuroCODE framework has to serve a range of practical purposes:

- to transfer knowledge within the project,
- to make the knowledge gained in recent years' research in participatory design and CSCW available and working in technical design sessions, and to direct these experiences towards change,
- to make the shell and demonstrators available to people outside the project,
- to systematize and theoretically ground the empirical experiences, especially scenario construction,
- to focus the development,
- to help provoke thoughts and ideas, in scenario construction as well as in the evaluation of these from a technical as well as from a work-oriented point of view.

Experiences from previous projects indicated to us that a method, in the traditional sense of a universal recipe, is not a feasible path. E.g. in Utopia, traditional requirement specification was substituted partly by experimental, participatory design based on mock-ups [Boedker et al 87, Ehn 88].

--- 1 The EuroCODE name stands for European CSCW Open Development Environment.
[Kyng in press] promotes scenarios as a way of supporting joint idea-generation, based on experiences from EuroCoop/EuroCODE, and [Bowers & Pycock 94] describe how design ideas come out of co-construction processes between users and designers. By borrowing the cooperative approach from Utopia and combining it with theoretical knowledge in an operational form, we seek to acknowledge the value of theory-driven design without ignoring the situatedness of use. Due to the distributed nature of our project, however, our conceptual toolbox needed to move beyond this. From [Kyng in press], we derive five practical reasons for situational description in design: they are needed to grasp user experience, to get real world reference, to avoid failure due to the blindness of designers, to provide material for mock-ups, and to mediate communication throughout the design process. The EuroCODE toolbox adds to Kyng's list the idea of deliberately working with contradiction as a vehicle for creative thinking, here realized not by merging but by contrasting the work-oriented and the technical checklists. Furthermore, it adds the idea of making theoretical knowledge available through checklists and prototypical examples. Still the overall argument for applying situational description in design concerns the social and cooperative nature of the design endeavour: representation in design is not a matter of holding on to something that is going - or not going - to be computed, as much as it is a matter of providing a common focus for discussing what, how and why something is going to be computed. Though the framework was meant to serve the primary purpose of making shell and demonstrators available to people outside the project, it has found an equally challenging role in the project in finding a way of bridging between the empirical and theoretical concerns of CSCW and the more technical concerns. 
Theoretical considerations about how to do this have been developed in [Boedker & Christiansen in press]. Here, we will move on to present the approach, before discussing it.

A CONCEPTUAL TOOLBOX IN CSCW DESIGN

The conceptual framework of EuroCODE is designed to deal with creative idea generation as well as systematic evaluation of ideas. In the conceptual toolbox, pointers to theoretical insight from the literature and empirical knowledge from working with users on cooperative work and computer support for cooperation are put together and made available through checklists and examples. These may be revised by the designers. Furthermore, it is suggested how to use these design artefacts in scenario making. And by recommending scenario making hand-in-hand with the use of checklists we aim to support contradiction and dialogue. Our inquiries into existing CSCW frameworks, e.g. [Schmidt & Bannon 92], have left us with the conclusion that they do not meet our requirements (see [Bødker & Mogensen 93]) because of their emphasis on defining concepts instead of demonstrating how work may become more cooperative through computer support. Thus, they are too little oriented towards change [Bødker & Christiansen in press] and do not sufficiently clarify how their concepts can be used in the process of designing CSCW applications, or where developers can find inspiration for good design ideas. Furthermore, they do not provide for the cultural, organizational and technological diversity which is the norm in EuroCODE. To meet the requirements listed in the introduction, the toolbox is consciously organized to let different perspectives talk to each other: theoretical concerns are applied to focus the scenarios through checklists, which help ask questions about a specific work situation and/or a specific CSCW application, thus enabling the designers to find out relevant constraints and key concerns.
We have initially developed a work-oriented checklist, taking into consideration aspects of overview, sharing, articulation work, etc. from the literature, and a technical checklist, taking up similar technical concerns, coming partly out of general experiences and partly out of the kind of technical "material" that we work with in EuroCODE (object-oriented databases, hypermedia, etc.). Each item on the checklist points to a short (1/2 page) text elaborating on the theoretical and technical concerns. These checklists can be modified through use, and more checklists may be introduced (an overview of the interplay of these components is shown in Fig. 1). The checklists will be presented in more detail later. Provocation of thoughts and ideas, in scenario construction and evaluation, is a matter of triggering ideas that are innovative on the one hand, but realistic and technically feasible on the other, recognizing the social, organizational and technical conditions which constrain a solution. A set of prototypical examples of CSCW technology serves to spark off good ideas for design and to ground design considerations in practical experiences from the literature or from the design domain. These prototypical examples have come out of applying, on the one hand, the work-oriented checklist to constructs such as Sørgaard's shared material [Sørgaard 88] and Robinson's common artifacts [Robinson 93], and, on the other hand, the technical checklist to central applications of the EuroCODE shell, e.g. hypermedia. The purpose of the scenario-making activity is to support the creation of a universe for creative thinking, to force certain directions of thought, rules for actions to be taken within the universe, and a concrete example upon which to perform the dialogue among the involved actors. In the toolbox, different types of scenarios are suggested, and their components and internal structure are specified.
This is in order to enable designers and users to efficiently capture and communicate their ideas. Fig. 1. The checklists and examples are used when creating scenarios, scenarios are the input as well as output of a number of other design activities. We have outlined here the example that we will discuss later. **APPLYING THE TOOLBOX - AN EXAMPLE** In the following we shall give an example of how we imagine the toolbox used in a design process at Great Belt. Though fictitious, the example replicates many of the experiences from using components of the toolbox in various settings in and outside the GB construction site, and it is based on empirical studies and cooperative design in EuroCOOP/EuroCODE. Furthermore, the actual kinds of scenarios suggested in the example are similar to those applied in EuroCODE at GB [Kyng in press, Grønbæk et al. 93]. The Great Belt bridge/tunnel project consists of a railway tunnel, a roadway bridge and a combined bridge. Great Belt Link Ltd. (GBL) was established to create the organization and to produce the material for the invitation of tenders. In a later phase GBL is supervising the construction activities. When the bridges and tunnel are completed in the late '90's, the organization will be responsible for operation and maintenance. The construction is specified in thousands of pages. During construction these specifications evolve, and progress is monitored. In turn, this process involves thousands of pages of progress reports, change requests and non-conformance reports. The supervision involves three parameters: time, economy, and quality. 
The design idea dealt with in our example is the following: *The supervisors at the construction site at Lindholm, the West Bridge, should be able to locate and annotate the documentation of the West Bridge, through a graphical representation of the construction site.* This design idea has come out of the EuroCoop/EuroCODE analysis and design work, which includes field studies of the work of supervisors and analysis of artifacts in use (filing facilities, construction documentation, etc.). The specific problem was initially dealt with through a visit, where two designers spent two days at the construction site: after an initial tour of the Lindholm site, they conducted preliminary interviews with four supervisors. Initially, the work-oriented checklist had been used when creating the interview guide. On the second day of the visit, they each followed a supervisor around for the day. These initial investigations were used as the outset for a structured brainstorm in which the same four supervisors participated for half a day together with the designers. The workshop was structured as the critique phase of a future workshop, where the participants took turns giving short statements of critique of their present work situation in the context of how they use documentation when on inspection.

Scenario 1: The Present at the Lindholm Inspection Office

This scenario addresses the problems regarding the present means of inspection, i.e. the filing and retrieval of construction documents, in particular regarding location and accessibility. The setting is the Lindholm inspection office today. Kurt is a supervisor at the Lindholm construction site. Together with 8-9 fellow supervisors he is supervising the prefab actions. His office is located in one of the office-containers at the construction site. He and his colleagues are individually assigned to specific actions, but do co-operate. Kurt and John are assigned to caisson inspection.
On Kurt's desk is a PC connected to the site office, which also serves as a meeting room for his group. Via the computer he can access his notes, some pictures etc., whereas he has to stay in contact with the secretary at the site office in order to get drawings, lab tests etc.

Action: Kurt is planning to leave his office to go out for an inspection. While still in the container office, when he has decided which part of the production line to take a look at, he collects his notes and reaches out for the copy of the work procedure handbook for a check - just to realize that it is gone, probably because one of his colleagues has taken it along to the construction site. Out on inspection, Kurt identifies some glide casting problems on the caisson that he is inspecting. He contacts the production engineer in charge and asks what steps have been taken so far. The production engineer claims that he has already written a memo and received acceptance of delay. Kurt regrets that he has not asked the secretary for a copy of the latest correspondence - now he has to go back to his office to get the documents. Back in his office, he lays down his notes and phones the secretary, who asks for the identification number of the caisson under inspection. When telling her the number, he begins to doubt whether he has really inspected one of John's caissons as he had intended, or something else. He asks John and together they go to study the printout of 'the Lindholm program' hanging on the wall in the corridor.

**Potentials, Problems and Bottlenecks:** Kurt faces problems with:

- the physical location of the documents needed;
- redundant archiving: the archives are both paper based and computer based, depending on location and version;
- different types of material, e.g. he relies on pictures and written text as well as lab test results;
- re-finding material, as key-word entry quite often does not fit his current definition of a problem.
However, these problems may be overcome technologically by providing flexible read and write access to the master file of inspection.

**Fig. 2. Scenario 1.**

A scenario (Fig. 2) focusing on problems with the current work situation was made by the designers based on the input from the workshop\textsuperscript{2}. In the construction of the scenario, the work checklist has been used to raise critical questions to be covered by the scenario. The scenario addresses the present; it is descriptive, similar to [Kyng in press]'s 'work situation descriptions and work situation overviews' that were actually used at GB.

---

\textsuperscript{2} Similar to the processes described in [Bødker et al. 93a and c, Bødker & Grønbæk 91a and b].

\textsuperscript{3} Future workshop activities are described in [Kensing & Madsen 91].

**The Process Continues...**

An iteration of this scenario is the starting point of the fantasy phase of the future workshop\textsuperscript{3}, where the participants (the same four supervisors and two designers) work to imagine solutions to their problems, without too many concerns for the practical implementation of the solutions\textsuperscript{4}. The common artifact example is used to help generate the fantasy. It is presented to the workshop by one of the designers. The work checklist is used by the designers to help consider the consequences of the choices made, resulting in a further scenario exploring primarily positive and negative trend situations in the future changed work, at an overall level. Furthermore, this scenario is explored in a simulation game between two designers and two supervisors\textsuperscript{5}, resulting in some modifications. A possible technical solution, starting out from the hypermedia example (Fig. 3, Scenario 2), is used to explore the possibilities of in particular positive and negative trend situations (Fig. 4, Scenarios 3 and 4), in a scenario similar to what [Kyng in press] calls 'exploration/requirement scenarios'. Work-oriented and technical checklists are used to explore the consequences.
Since they span the positive and negative sides of use of the same design suggestion, we have chosen to summarize the potentials, problems and bottlenecks for both scenarios at once. They formed the basis for, and were developed in iteration with, a cooperative prototyping activity\textsuperscript{6}, where an initial prototype was created by the designers, based on the revised technical scenario. Such scenarios have been used at the GB under the name of 'use scenarios' [Kyng in press].

The actual cooperative prototyping takes place in a setting at the Lindholm site, where two of the supervisors and two designers take part. The supervisors have been asked to bring relevant work material for an inspection task, the data of which also exist in the initial prototype. The scenarios set the stage and are discussed during the session when problems mentioned in the scenarios occur. Finally, the potentials, problems and bottlenecks are discussed systematically when wrapping up the prototyping session. As the solution in this case is accepted overall, the cooperative prototyping proceeds in three sequential half-day meetings with a more and more detailed and full-blown prototype. The process changes character from one where it is important to focus on the typical work around the prototype to one where it is important to explore and get hands-on experiences with a more and more complete prototype, covering more and more specific, yet peripheral work activities. The scenarios in this process become increasingly rich, and the process changes to one where evaluation becomes more and more crucial. In the end, the refined technical scenario is used by the group of designers and users when handing over the prototype to those (including the two designers) who are to work on an implementation (in Kyng's terms, 'explanation scenarios').
The two supervisors find it important that all supervisors at GB are informed about the discussions of the initial design idea, and the scenarios form a good starting point for making such a presentation.

---

\textsuperscript{4} How future workshops may be used this way is described in [Kensing & Madsen 91, Bødker & Grønbæk 91a and b].

\textsuperscript{5} Similar to simulations described in [Bødker et al. 93a].

\textsuperscript{6} [Bødker & Grønbæk 91b].

**Scenario 2: A hypermedia solution**

This scenario addresses the technical aspects of a hypermedia solution, using portable PCs. The setting is the Lindholm construction site sometime in the future. Kurt has access to a portable PC. The portables are hooked up to the computer at the site office via a wireless modem connection, through which the supervisors run the hypermedia application.

Action: Kurt collects the drawing he has received from the site office, takes his portable computer and goes out on inspection. He stops at caisson no. 41 and checks the present state of affairs against the production plan and the drawings. Using the light pen on the bar code on the caisson he accesses the computer and gets a list of the latest correspondence on the screen. Clicking once more he sees at a glance that a delay has been accepted, and he starts a conversation with the production engineer about the progress achieved so far. Still on the spot, he enters his notes directly into the master file. This way of working is made possible because the portables are hooked up to the site office.

The interface is based on a graphical map of the construction site through which one can access a hypermedia application offering shared access to all kinds of material: drawings, notes, pictures, lab tests, non-conformance reports, etc. By clicking on the graphical representation of the construction site, icons like caissons, cranes, and other relevant objects are accessible. These constitute the basic objects of the application.
Links have been established to data about the accessible objects. By default, links go from objects to the master file, but it is also possible to make a link from the whole production line of caissons or to establish a link between a production line and a specific office at the site. By clicking three to four times one can zoom in on details of the caisson under current inspection. By a single or double click one can get an overview of the relevant production plan, or one can inspect who has recently checked the plans. The process proceeds smoothly: a light pen is attached to the portable PC and, as all objects at the construction site have bar codes, it is easy to make the connection and get the relevant material right away. In addition, the location of other supervisors on inspection is monitored, provided they have a portable computer with them and are on-line.

**Potentials, Problems and Bottlenecks - some examples**

The physical distribution of data raises concerns about where the hypermedia objects are located, and to what extent they have to be transferred to the actual PC running the application. The graphical representation of the work site will undergo changes as one caisson is finished, installed and replaced at the production line by a new one. The updating of the graphical representation, including the initial establishing of links, may be done by the supervisors on site, or as part of the work at the site office, where access to the master file may be smoother and the time pressure less critical. This raises concerns for the updating of the shared database, for which objects are actually held in common by the users, and for the initial cost of setting up the user interface. The graphical representation must represent all relevant objects in an overview and with a resolution that makes them accessible for mouse clicks.

*Fig. 3.
Scenario 2.*

For the sake of the story here, we have assumed that a hypermedia solution is, at some level, a feasible solution to the design problem. Certainly, this does not have to be the case, though in the real case at the Great Belt there has been a lot of interest in hypermedia support for documentation [Grønbæk & Mogensen 94]. As a matter of fact, the whole point of user participation is to find out what needs to be done in the real work situation. The toolbox focuses on the kind of technical solutions that may be built using, and extending, the EuroCODE shell. Cases where the EuroCODE platform needs to be abandoned altogether are a different story.

Fig. 4. Scenarios 3 and 4.

**THE TOOLBOX - A SYSTEMATIC PRESENTATION**

The checklists that were applied in the example are presented in Fig. 5 and 7, and examples of the detailed description of items in Fig. 6 and 8. Descriptions of items start with an introduction, then tell the user what to consider, and finally list a couple of specific issues to be checked. Items of the work-oriented checklist are put together as pointers to sociological CSCW literature [Schmidt & Bannon 92, Hughes & King 92, Heath & Luff 92, Hutchins 90, Robinson 93, Star & Griesemer 89]. Each item of the work-oriented checklist captures a specific part of the work situation and thus opens up a unique perspective on the workplace. These perspectives may be redundant, since some items on the list overlap; e.g. details on the physical workplace may include similar information as details on the spatial distribution of actors and tools.

---

The work-oriented checklist consists of fourteen items for capturing crucial aspects of work situations. These items address:

1. the physical setting of the workplace
2. the organizational environment
3. the actors in the work situation
4. the working activities and their relations
5. the materials used and the outcomes produced
6. the communication facilities and tools
7. the standards for procedures and products
8. the security measures and access rights within the organization
9. the sequentiality of procedures and (sub-)products
10. the actors' awareness of each others' activities
11. the actors' overview of current projects and tasks
12. the spatial distribution of actors, materials and tools
13. the degree and kinds of mobility required from the actors
14. the constraints of current working conditions and possibilities for changing them.

**Fig. 5. The work-oriented checklist.**

---

**Materials and outcomes:** In the course of activities, different kinds of materials and information may be required as input. These are processed by the actors and may lead to various intermediate results and/or final products. Consider which different kinds of material, intermediate results and final outcomes are crucial for the work situations you are investigating:

- What kind of materials (documents, pictures, drawings etc.) are involved in the production of which kind of products?
- Which sub-products or intermediate results are produced?
- How are the sub-products or intermediate results used in further production, i.e. what are the crucial "handovers" and transitions, and when and how do they take place?
- What are the metaphors currently used to characterize materials and results?

**Fig. 6. Detailing item 5 on the work-oriented checklist.**

Redundancy of that kind is incorporated into the checklist for three reasons:

- It helps increase the completeness of work aspects relevant for designing a CSCW solution. For example, if details about an actor's physical location are missing from the description of the physical setting in the first item, there is a good chance that they are mentioned in item 12 or even 13.
- It helps achieve new insights into particular characteristics of cooperative processes, since it addresses the same aspects from different points of view.
For example, the creation and continuous manipulation of a common artifact may be seen from the perspective of activities and their relations (item 4) or from the perspective of materials, successive sub-products and final outcomes (item 5). Both may have different implications for providing technical support.
- It helps detect contradictions, inconsistencies or hidden aspects. For example, actual procedures and actors mentioned in items 3 and 4 may deviate from organizational standards addressed in item 7. Such contradictions indicate where things are still unsettled, unclear or controversial, and thus may point to potentials for reorganization and optimisation.

A similar kind of redundancy is found in the technical checklist, where redundancy may trigger conflicting design ideas, thus hinting at the creative space for technical solutions. The technical checklist addresses important features of technical solutions - in particular CSCW solutions - that should be considered in designing systems and applications.

---

The technical checklist consists of twelve items for capturing crucial aspects of technical solutions. These items address:

1. the metaphors to be used for constructing or understanding the technical solution
2. the major, object-oriented concepts
3. the physical location of hard- and software
4. the complexity of the technical solution
5. the sequentiality of activities enforced
6. the extent and forms of functional integration
7. the security measures and access rights provided
8. the CSCW potentials which help users to overcome distances in time and space
9. the potentials for awareness support
10. the extent and kinds of tailorability
11. the cost-benefit distribution with respect to different cooperating users
12. the least-effort strategies with respect to implementation, introduction and maintenance.

**Fig. 7. The technical checklist.**
**Metaphors as springboards:** Metaphors designate a comparison between things essentially or generically different but strikingly alike in some pertinent aspects. They can be used to characterize technical facilities, to support their understanding and to generate surprising technical solutions. Therefore, they may serve as springboards for new design ideas [Madsen 87]. Consider which metaphors are appropriate to characterize and illustrate a technical solution or components of a technical system:

- How can the technical solution (or parts of it) be "seen as something else" or "described in terms of something else"?
- What are the correspondences between the metaphor and the technical solution; where are the limits and differences?
- In particular, which metaphors can be used for illustrating the user interface, data structures, or particular functionalities?

**Fig. 8. Detailing item 1 on the technical checklist.**

Items on the list may provoke multiple or even competing design ideas. For example, when designers discussed the first item on the checklist for the hypermedia component of the EuroCODE environment, they produced a variety of metaphors, such as "a book", "a trail leading through information", "a kaleidoscope", and "glued material" [Bødker et al. 93b]. Obviously, each of these metaphors may trigger different design ideas, e.g. for the user interface. Thus, the technical checklist, applied by several members of a design team, helps to capture multiple ideas and can serve as input for presentation, discussion and decision making.

As prototypical examples, we have crystallized a series of work-oriented as well as technical examples to think from and to kick off new ideas. These examples are structured through the use of the checklists. It would take things too far to go through all of the examples systematically here. Nevertheless, we have briefly summarized one of the work-oriented examples (Fig. 9), as well as a technical one (Fig. 10).
**Cooperation through Communication**

According to [Robinson 93], a common artifact is an elaboration of the dimensions of communication that take place through, and are supported by, a computer application or artifact. Any specific artifact with the requisite dimensionality is considered a common artifact. A common artifact is an effective tool for getting a job done; it helps people see at a glance what others are doing; it enables actions and changes made by others to be understandable, and appropriate changes to be made; it provides a focus for discussion of difficulties and negotiation of compromises; it offers an overview over the work process that would not otherwise be available. Four dimensions of the common artifact are described:

- predictability: this covers issues like dependability, functionality (incl. consistency and compatibility) and an appropriate interface;
- peripheral awareness, conceived as local and immediate ('at a glance');
- double level language: this includes conventionalised implicit communication through the artifact ('shared material') and the role of the artifact as an 'indexical focus' for dialogue;
- overview: this is the complement to peripheral awareness; a common artifact can make situations outside the here and now available within the here and now.

Robinson gives examples such as a hotel key rack and a map. [Hutchins 90] has described how a map can function as a common artifact for a group navigating a large ship. In both cases the physical setting, in terms of where the artifact is placed, is important for who has access to its information when. The common artifact is a tool and may at the same time serve co-ordinating purposes by making people's activity visible to others: by doing so, the common artifact all in all serves as a mediator of the cooperation that takes place. The mediating functions may be embedded or open to communication depending on how stable the setting, the problem space and the actors' knowledge and skills are.
A common artifact does not necessarily enforce sequentiality, but it may offer it: e.g. you have to assume that all keys not in use have been replaced in order to be correct in interpreting an empty hook to mean that the guest is in. The question of robustness to unanticipated use is important here, due to the high probability that users will depart from any sequence of operations that can be anticipated. In the case of a map, the use is quite open to differences in sequencing. The concept of a common artifact springs from experiences with co-located work settings, yet the access to the artifact may be distributed in time and space. For example, the users do not have to check in on the hotel key-rack simultaneously. Also, those users who only need a glance at the key-rack may do so from a distance, or even via remote connections such as video. In keeping with the above discussion, a pure video connection would probably not be sufficient as the only access to an artifact.

**Hypermedia**

A book is a good metaphor for hypermedia. The materials of a hypertext are similar to those of a book: texts, pictures and graphics. Contrary to the book, a hypertext may also contain e.g. video and sound. A book facilitates unstructured browsing through the materials, and it provides structured means for retrieval of materials. In the book, as opposed to hypertext, these structures cannot be changed once the book has been put together. A kaleidoscope, another useful metaphor, provides new patterns in the material by presenting it in no predetermined, yet systematic order. The kaleidoscope allows the user to find new trails through the material. With the hypertext, as opposed to the kaleidoscope, the trails need to be defined before use. None of these metaphors tells much about how the structure of a given hypertext comes into being. Hypermedia technology (hypertext) has shown promise in supporting groups working on shared materials.
Hypermedia technology helps maintain networks (hypertexts) of associations (links or link components) between chunks of material (nodes or components). Anchors inside materials may be supported; for instance, text components may provide anchoring of single characters, words and paragraphs as endpoints for links. Anchors appear in the interface as link markers. Hypermedia clients and database servers may be distributed over several machines in a local area (and/or wide area) network. The database notifications carry information about which machine the connected users are using when modifying hypertexts and components. Hypermedia facilities may also become available for mobile computers. Hypertexts, as well as any individual parts of them, can be held in common by several users, though components may be temporarily locked during a linking procedure. Designers need to decide the granularity for locking. Hypermedia applications usually only support asynchronous cooperation. The event notification mechanism supports users in maintaining awareness of who is doing what on the shared materials. The hypermedia client provides support for propagating notifications as a combination of messages in a console, bell sounds, small icons appearing in editor or browser windows, and immediate updates.

Summarizing the example gives an idea of how the various components of the toolbox may be systematically applied. Fig. 11 looks at the various cooperative design activities in the example, emphasizing which components were used in the activity, what the outcome was, and who took part. The figure illustrates that scenarios take on the typical double role of design artifacts: being produced in some activities, and instruments of others. Fig. 12 characterizes the use of the toolbox components as design artifacts in exactly these roles.
<table> <thead> <tr> <th>Activity</th> <th>Artifacts used</th> <th>Outcome</th> <th>Participants</th> </tr> </thead> <tbody> <tr> <td>Interviews and field trip</td> <td>Work-oriented checklist</td> <td>Scenario 1</td> <td>designers as producers</td> </tr> <tr> <td>Future workshop, critique</td> <td>Scenario 1, work-oriented checklist</td> <td>Scenario 1</td> <td>users and designers</td> </tr> <tr> <td>Future workshop, fantasy</td> <td>Scenario 1, work-oriented checklist, common artifact example</td> <td>Scenario 3 and 4</td> <td>users and designers</td> </tr> <tr> <td>Technical scenario production</td> <td>Hypermedia example, technical checklist</td> <td>Scenario 2</td> <td>designers as producers</td> </tr> <tr> <td>Simulation game</td> <td>Scenario 3 and 4, scenario 2</td> <td>Scenario 3 and 4</td> <td>users and designers</td> </tr> <tr> <td>Cooperative prototyping</td> <td>Scenario 3 and 4, scenario 2, prototype</td> <td>Scenario 3 and 4</td> <td>users and designers</td> </tr> </tbody> </table> Fig. 11. The example summarized regarding the roles of checklists and scenarios. 
<table> <thead> <tr> <th>Design artifact</th> <th>Description</th> <th>Produced in or as a result of e.g.</th> <th>Produced by (typically)</th> <th>Used in or provides basis for</th> <th>Used by (primarily)</th> </tr> </thead> <tbody> <tr> <td>w-o checklist</td> <td>raises questions about typically critical aspects regarding work, in particular cooperation and computer support for this activity</td> <td>framework construction based on theory, revised based on experience</td> <td>metaactivity + designers</td> <td>analysis of existing work situations, scenario production, exploring scenarios</td> <td>designers</td> </tr> <tr> <td>technical checklist</td> <td>raises questions about typically critical aspects regarding technical implementation, in particular regarding sharing and distribution of technical solutions</td> <td>do + EuroCODE technical possibilities</td> <td>metaactivity + designers</td> <td>investigation of existing computer application or technical design suggestion, exploring scenarios</td> <td>designers</td> </tr> <tr> <td>w-o prototypical example</td> <td>presents examples of work organizational constructs to enhance computer supported cooperative work - potentials and problems</td> <td>do</td> <td>metaactivity + designers</td> <td>idea generation, scenario production</td> <td>designers</td> </tr> <tr> <td>technical prototypical example</td> <td>presents examples of technical constructs to enhance computer supported cooperative work - potentials and problems</td> <td>do</td> <td>metaactivity + designers</td> <td>idea generation, scenario production</td> <td>designers</td> </tr> </tbody> </table>

Fig. 12. Applying [Kyng in press]'s format for characterizing design artifacts to the toolbox components. We see these mainly as tools for designers.

**Scenario-making**

In general, scenario construction is an important method for assessing mid- and long-range developments in technology, economy and society.
Scenarios allow for the description of alternatives and may address current situations as well as hypothetical ones in the future. What is characteristic for our way of thinking about scenario construction is that scenarios are constructed in a tight loop with their use in groups of designers and users - perhaps users do not always take part in the initial construction of scenarios, but they do in the iteration that takes place in joint design activities such as prototyping sessions or future workshops. Scenarios, as any other design representation, serve the double purpose of engendering the decisions made in the design situation and of being a medium of communication between the persons involved in the design activity - most noticeably, in our context, the cooperating designers and users, but also people from outside the group, e.g. future users or potential purchasers of the computer application and other groups of designers.

The term 'scenario' is used with various meanings in the literature. These conceptions, however, share several features that more or less constitute the term: A scenario is hypothetical, i.e. it describes some possible or potential alternative in the present or future. It is selective, i.e. it represents one possible state of complex, interdependent, dynamic and opaque affairs; bound, i.e. it consists of a limited number of states, events, actions and consequences, or subsets of these categories; connected, i.e. its elements are conditionally, temporally or causally related; and assessable, i.e. it can be judged with respect to its probability and/or plausibility.

Because of these features, scenarios leave enough freedom for a 'disciplined intuition', but constrain the construction process in a reasonable way: The demand to be hypothetical but possible distinguishes scenarios from mere science fiction stories. Selectivity and boundness imply that a single scenario should not try to capture an extensive and complex domain.
If such a domain needs to be handled, it should be broken down and addressed by a set of scenarios. This claim has consequences for the size of scenarios, i.e. they should be concise stories instead of long-winded novels. Connectivity suggests that a scenario should form a coherent entity in terms of stories which represent conditional, temporal or causal relationships between crucial states of affairs. Assessability, finally, ensures that a scenario can be evaluated to find out how it is constrained.

Scenarios should be 'stories' located in time and space, 'traces' featuring details, not 'novels'. The scenario should capture the main persons and their activities. This information should allow for conclusions which may serve as answers to the issue of the scenario. Hence, a scenario should consist of at least four parts: an issue, a setting, a story and a conclusion.

[Campbell 92] categorizes various types of scenarios dealt with in the HCI literature, based on the assumption that a scenario refers to "representative instances of interaction between user and system". He mentions:

- Scenarios for illustration aim to clarify what it is like to use a system.
- Scenarios for evaluation take the form of evaluation tasks by specifying step-by-step procedures to be carried out by users in the evaluation of a system.
- Scenarios for design or redesign should represent examples of user-system interaction and may contain correct as well as faulty user activities.
- Scenarios for testing theories investigate the accuracy or predictive power of a theoretical approach in HCI.

Scenarios may also be the basis for illustrating ways of introducing a computer application into an organization, or strategies for marketing and education. Many characteristics of scenarios depend on their use. For example, [Carroll et al.
91, Carroll & Rosson 92] introduce the notion of critical and typical situations: Scenarios should be designed based on knowledge about typical ways of doing things, but addressing specific, critical instances of the typical. This distinction is analogous to the one by [Ducot & Lubben 80], who use the terms 'trend' versus 'peripheral' and moreover differentiate between descriptive and normative scenarios. In addition, it is useful to distinguish between scenarios which describe present or future situations and those which focus on positive or negative aspects. When these characteristics are combined they constitute different types of scenarios which are applied differently in the design process (Fig. 13). Peripheral or critical scenarios may include situations that are contradictory to the mainstream. And actually applying the checklists may result in contradictory scenarios or perspectives. This is not a problem, as we see it: Contradictions are thought provoking in the interaction among designers and users and imply that there is often not one best solution, in contrast to the assumptions of many traditional systems development methods. The creative space is exactly where contradictions are confronted.

<table> <thead> <tr> <th>Scenario types</th> <th>Description</th> <th>Produced in or as a result of e.g.</th> <th>Produced by (typically)</th> <th>Used in</th> <th>Used by (primarily)</th> </tr> </thead> <tbody> <tr> <td>Present work descriptive scenario</td> <td>summarizes the way current work in the organization is interpreted by the design team, centered around relevant, existing situations within the users' workplace - duly reminded by the theoretical questions from the work-oriented checklist</td> <td>initial study, work-oriented checklist</td> <td>designers</td> <td>e.g.
Future workshops, development of use scenarios and mock-up/prototypes</td> <td>designers and end-users</td> </tr> <tr> <td>Future work scenarios</td> <td>indicate how computer support and/or changes in work organization in the future may improve upon work situations, and set the stage for simulation at workshops</td> <td>embodying ideas and workshop preparations, technical and work-oriented checklist, w-o prototypical examples</td> <td>designers and end-users</td> <td>e.g. simulation games</td> <td>end-users</td> </tr> <tr> <td>Detailed positive or negative scenarios (scenarios 3 and 4)</td> <td>detailed scenarios supplying the use-details needed for discussing whether or not suggested technical capabilities will function in use; explain/hypothesize about new possibilities for support with the current prototypes</td> <td>embodying ideas and workshop preparations, technical and work-oriented checklist, prototyping</td> <td>designers and end-users</td> <td>e.g. prototyping, mock-ups</td> <td>end-users</td> </tr> <tr> <td>Future technical scenarios (scenario 2)</td> <td>detailed scenarios supplying the use-details needed for discussing whether or not current technical capabilities meet the requirements of the scenario</td> <td>explorations of capabilities of existing software designs, technical checklist, technical prototypical example</td> <td>designers</td> <td>discussions of capabilities of existing software designs</td> <td>designers and end-users</td> </tr> </tbody> </table>

Fig. 13. Scenarios as design artifacts.

In [Bødker & Christiansen in press], we discuss, as mentioned in the introduction, how our CSCW toolbox may be thought of from a theoretical point of view. We discuss how we can view CSCW design artifacts as support for bridging people and matters that are physically, temporally and conceptually apart; they must be open and negotiable and facilitate dialogue around contradictions.
Certainly the notion of, and reasons behind, the toolbox have to be rethought theoretically and practically based on the experience we collect from practical use and revision of the toolbox. In the following we indicate the lines along which we expect to gain input for an iteration. **APPLYING THE TOOLBOX** The toolbox may be applied in many different ways and for a variety of purposes. A few examples are: - to use the work-oriented checklist for constructing questionnaires, guiding observations, and structuring collected data; - to use the technical checklist for presenting design ideas in an informal but well structured way to users, other designers, potential customers etc.; - to use the prototypical examples as a starting point for considering solutions and as an input for discussing their adaption and enhancement; • to use the different kinds of scenarios for focusing on specific aspects of a present work situation or for describing the potential impact of a CSCW solution on a working environment. So far, we have deliberately avoided to specify procedures of how to use the toolbox, and the framework does not propose or constrain how to combine its components with other techniques. Our example above gives some indications, however, of the type of process we are after: 1. A series of scenarios constitute the back-bone of the design project. 2. These scenarios help span out a theory-oriented exploration of design situations, while keeping the grounding in the specific empirical setting. 3. Numerous activities take place around the design-activity, ranging from initial interviewing and observation to programming and testing. 4. Some of these activities, as well as the actual shaping of the scenarios, are cooperative activities among users and designers, others are primarily done by professional designers. 5. 
Since the scenarios as such do not embody the technology, and are thus not available for hands-on experience by users, scenario making will primarily form the basis for other activities in which they are embodied and explored. They may set the stage for, and point at, problems and solutions to be dealt with in cooperative prototyping [Bødker & Grønbæk 91a and b], when using mock-ups [Ehn & Kyng 91], simulations, or in more systematic explorations of a running computer application for evaluation or education purposes. 6. Furthermore, the scenarios can be designed and explored in other types of design-by-doing situations such as future workshops [Kensing & Madsen 91], organizational games [Ehn & Sjögren 91], dilemma games [Mogensen 94]. As mentioned, our experiences with scenarios as useful artifacts in cooperative design situations date back to the EuroCoop project [Kyng in press] and even earlier [Bødker 91, Jungermann & Thüring 87]. However, practical experiences with the use of the toolbox are still limited. A first version of the toolbox has been introduced to the EuroCODE community in a workshop and used by three groups of designers in this context. Based on feedback from this, the checklists were revised, and the internal structure for scenarios was described in more detail. The toolbox will be continuously improved and revised in the further course of the EuroCODE project, in iteration with the development of the EuroCODE shell and demonstrators, thus hopefully supporting the creative elements in EuroCODE itself. ACKNOWLEDGEMENTS The work has been funded in part by Esprit project 6155 - EuroCODE.
The EuroCODE T1.1 group (Kaj Grønbæk, Morten Kyng, Kim Halskov Madsen, Preben Mogensen, Peter Axel Nielsen and Mike Robinson of Aarhus University; Heike Kühn and Simon Robinson of empirica; Elke Hinrichs and Thomas Kreifelds of GMD; Pål Sørgaard, NR; Pippa Hennesy, Nexor; Lone Faber, Daniele Pagini and Wendy Mackay of RANK Xerox EuroPARC) helped develop the toolbox. Our presentation owes much to Morten Kyng. Olav Bertelsen, Tom Moran and several anonymous reviewers provided useful comments on earlier drafts. We thank Susanne Brøndberg for improving our English, and Olivier Danvy for the French translation of the abstract. REFERENCES
# Module 10: Data Processing and Text

- Internet users in the world: 3,587,520,067
- Total number of websites: 1,162,381,089
- Emails sent today: 82,167,078,187
- Google searches today: 1,819,868,567
- Blog posts written today: 1,699,641
- Tweets sent today: 231,740,961
- Videos viewed today on YouTube: 2,104,290,745
- Photos uploaded today on Instagram: 23,682,137
- Tumblr posts today: 37,451,716

TRAFFIC FROM MOBILE & ONLINE MESSAGING TO REACH 438 BILLION PER DAY BY 2019

Hampshire, UK: 6th July 2015: New data from Juniper Research has shown that mobile and online messaging traffic will reach 160 trillion per annum by 2019, up from 94.2 trillion this year – equating to approximately 438 billion messages sent and received by users on a daily basis by 2019. These figures incorporate SMS, MMS, IM (Instant Messaging), Social Media and Email. Last year, email accounted for the largest share of traffic, at around 35 trillion messages per year – although almost 80% of this figure (28 trillion) can be categorised as spam. However, within the next 12 months IM will overtake email, generating almost 43 trillion messages annually.
<table> <thead> <tr> <th><strong>YouTube Company Statistics</strong></th> <th><strong>Data</strong></th> </tr> </thead> <tbody> <tr> <td>Total number of people who use YouTube</td> <td>1,325,000,000</td> </tr> <tr> <td>Hours of video uploaded to YouTube every minute</td> <td>300 hours</td> </tr> <tr> <td>Number of videos viewed on YouTube every day</td> <td>4,950,000,000</td> </tr> <tr> <td>Number of unique visits to YouTube every month</td> <td>900,000,000</td> </tr> <tr> <td>Total number of hours of video watched on YouTube each month</td> <td>3.25 billion hours</td> </tr> <tr> <td>Number of YouTube videos that have generated over 1 billion views</td> <td>10,113</td> </tr> <tr> <td>Percent of YouTube visitors that come from outside the U.S.</td> <td>70%</td> </tr> <tr> <td>Number of countries with localized versions of YouTube</td> <td>42</td> </tr> <tr> <td>Total number of languages YouTube is broadcast in</td> <td>54</td> </tr> <tr> <td>User-submitted video with the most views – “Charlie bit my finger”</td> <td>829,000,000</td> </tr> <tr> <td>Average number of mobile YouTube video views per day</td> <td>1,000,000,000</td> </tr> <tr> <td>Average time spent on YouTube per mobile session</td> <td>40 minutes</td> </tr> <tr> <td>Average YouTube partner channel payout per 5,000 views</td> <td>$0.32</td> </tr> </tbody> </table>

Netflix and YouTube Are America's Biggest Traffic Hogs

Share of peak period downstream traffic in North America, by application (September 2013, fixed access only):

- Netflix: 31.62%
- YouTube: 18.62%
- HTTP: 9.74%
- BitTorrent: 4.05%
- iTunes: 3.27%
- Other MPEG: 2.60%
- SSL: 2.05%
- Amazon Prime Video: 1.61%
- Facebook: 1.31%
- Hulu: 1.29%
- Other: 23.77%

Data challenges

- Creating it
- Storing it
- Moving it around
- Keeping it private
- Making sense of it

Searching, indexing

Collecting, Correlating, Recommending

Online Advertising - Convert Shoppers with Relevant Ads www.criteo.com/Mobile Make More Sales with Criteo Today. 10,000 brands · 130 countries Services: Transparent CPC Pricing, Unparalleled Technology, Dynamic Creative, Cross ... Contact Us Tell Us A Little About Yourself. Find a Local Office Near You. What We Do Driving Better Marketing Results. Learn About Our Technology Today

Advertising Online - Marketing360.com www.marketing360.com/OnlineAdvertising +1 855-773-8169 #1 Marketing Platform® For Advertising Online. Free Demo!
Online Advertising - Reach More Customers Online www.rogersoutrank.com/Client_Leads Demo OutRank's Powerful Platform. Online advertising - Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Online_advertising Online advertising, also called online marketing or Internet advertising or web advertising, is a form of marketing and advertising which uses the Internet to ... Display advertising - Web banner - Mobile advertising - Paywall Rose repeats to Hammerbacher—who’s a founder of data analytics company Cloudera—a line from an interview he gave Businessweek back when he was an early employee hustling stats for Harvard bud Zuckerberg at Facebook: “The best minds of my generation are thinking about how to make people click ads.” And Rose, in his politeness, left off the last part of the line: “That sucks.” # Patterns, trends, predictions ## Quarantine <table> <thead> <tr> <th>My Folders</th> </tr> </thead> <tbody> <tr> <td>Low Priority Mail -</td> </tr> <tr> <td>Quarantined (0)</td> </tr> <tr> <td>Low Priority Mail - Delivered</td> </tr> <tr> <td>Spam - Quarantined (60)</td> </tr> </tbody> </table> ## Spam - Quarantined <table> <thead> <tr> <th>Score</th> <th>From</th> <th>Subject</th> </tr> </thead> <tbody> <tr> <td>99</td> <td><a href="mailto:vanzari@aramisfeeling.ro">vanzari@aramisfeeling.ro</a></td> <td>Cadoul tau e la un cl</td> </tr> <tr> <td>99</td> <td><a href="mailto:Aden@phwwemoty.org">Aden@phwwemoty.org</a></td> <td><em><strong><strong>SPAM</strong></strong></em>* To l</td> </tr> <tr> <td>99</td> <td><a href="mailto:rzljqzoaeqywpx@cgl.uwaterloo.ca">rzljqzoaeqywpx@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:fymvsmqxcof@cgl.uwaterloo.ca">fymvsmqxcof@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:mwhnskic@cgl.uwaterloo.ca">mwhnskic@cgl.uwaterloo.ca</a></td> 
<td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:amy-manges@goodsnfits.com">amy-manges@goodsnfits.com</a></td> <td><em><strong><strong>SPAM</strong></strong></em>* See</td> </tr> <tr> <td>99</td> <td><a href="mailto:vcxtuprei@cgl.uwaterloo.ca">vcxtuprei@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:syxlnxsg@cgl.uwaterloo.ca">syxlnxsg@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:vlvorxrcdk@cgl.uwaterloo.ca">vlvorxrcdk@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:rjtirt@cgl.uwaterloo.ca">rjtirt@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:sjwsyf@cgl.uwaterloo.ca">sjwsyf@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:nxrvghoxqbewg@cgl.uwaterloo.ca">nxrvghoxqbewg@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:sdsjocuiv@cgl.uwaterloo.ca">sdsjocuiv@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:laryfhakexa@cgl.uwaterloo.ca">laryfhakexa@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:wcjhpjopvxk@cgl.uwaterloo.ca">wcjhpjopvxk@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:mmprig@cgl.uwaterloo.ca">mmprig@cgl.uwaterloo.ca</a></td> <td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> <tr> <td>99</td> <td><a href="mailto:wtlby@cgl.uwaterloo.ca">wtlby@cgl.uwaterloo.ca</a></td> 
<td><em><strong><strong>SPAM</strong></strong></em>*</td> </tr> </tbody> </table>

Welcome To MalaysianCupid.com! Dear Member Congratulations on joining MalaysianCupid.com! This email contains your logon information and important information that will help you get the most out of your membership and help you to find your perfect partner. YOUR LOGIN DETAILS: Login Email: azriel@gmail.com Your Password is: ayang78 Please save this email for future reference!

Thanks for your payment, processed on June 02, 2015. Hello, AZRIEL TEPPER American Express Card 81009 We processed your scheduled payment. $987.91 PROCESSED ON June 02, 2015 It's processed today - but give us 24 - 36 hours for your payment to appear online. View your account. Thanks for your Card Membership, American Express Customer Care Was this e-mail helpful? Give us your feedback STAY CONNECTED @AskAmex Amex Customer Care, at your service. Community_@Amex Your questions. Your interests. Your community. Amex Mobile available on the App Store℠ or Google Play™

Figure: 1700 Years of Global Temperature Change from Proxy Data (temperature change in °F with uncertainty; proxy-based and thermometer-based records, AD 300–1900; Medieval Warm Period and Little Ice Age marked).

The Data That Turned the World Upside Down HANNES GRASSEGGER AND MIKAEL KROGERUS Jan 28 2017, 9:15am Psychologist Michal Kosinski developed a method to analyze people in minute detail based on their Facebook activity. Did a similar tool help propel Donald Trump to victory? Two reporters from Zurich-based Das Magazin went data-gathering. An earlier version of this story appeared in Das Magazin in December.

Vault 7: CIA Hacking Tools Revealed Contents - Press Release - Analysis - Examples

The shape of data

How is your information organized? How do the parts relate to each other? These questions profoundly affect the tools you use and the code you write.

Call me Ishmael.
Some years ago—never mind how long precisely—having little or no money in my purse, and nothing particular to interest me on shore, I thought I would sail about a little and see the watery part of the world. It is a way I have of driving off the spleen and regulating the circulation. Whenever I find myself growing grim about the mouth; whenever it is a damp, drizzly November in my soul; whenever I find myself involuntarily pausing before coffin warehouses, and bringing up the rear of every funeral I meet; and especially whenever my hypos get such an upper hand of me, that it requires a strong moral principle to prevent me from deliberately stepping into the street, and methodically knocking people's hats off—then, I account it high time to get to sea as soon as I can. This is my substitute for pistol and ball. With a philosophical flourish Cato throws himself upon his sword; I quietly take to the ship. There is nothing surprising in this. If they but knew it, almost all men in their degree, some time TRUMP: Chief Justice Roberts, President Carter, President Clinton, President Bush, President Obama, fellow Americans and people of the world, thank you. (APPLAUSE) We, the citizens of America, are now joined in a great national effort to rebuild our country and restore its promise for all of our people. (APPLAUSE) Together, we will determine the course of America and the world for many, many years to come. We will face challenges, we will confront hardships, but we will get the job done. Every four years, we gather on these steps to carry out the orderly and peaceful transfer of power, and we are grateful to President Obama and First Lady Michelle Obama for their gracious aid throughout this transition. They have been magnificent. Thank you. 
(APPLAUSE) McCarthy Faulkner medium.com/@neuroecology/punctuation-in-novels-8f316d542ec4 Hi Craig, --- Rishabh Moudgil Internet Message Format Status of this Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Copyright Notice Copyright (C) The Internet Society (2001). All Rights Reserved. Abstract This standard specifies a syntax for text messages that are sent between computer users, within the framework of "electronic mail" messages. This standard supersedes the one specified in Request For Comments (RFC) 822, "Standard for the Format of ARPA Internet Text Messages", updating it to reflect current practice and incorporating incremental changes that were specified in other RFCs. Sequence 46.12 47.88 46.32 45.27 44.32 43.87 44.23 42.95 41.74 40.69 41.68 40.73 40.75 40.55 39.39 39.27 40.89 41.22 40.57 40.43 40.58 39.93 41.08 40.00 37.64 37.46 37.16 36.76 35.65 36.31 37.32 35.55 34.98 34.72 34.55 36.12 36.76 37.62 36.36 37.88 36.59 37.13 The Right Honourable Justin Trudeau The Right Honourable Stephen Harper The Right Honourable Paul Edgar Philippe Martin The Right Honourable Joseph Jacques Jean Chrétien The Right Honourable A. 
Kim Campbell The Right Honourable Martin Brian Mulroney The Right Honourable John Napier Turner The Right Honourable Pierre Elliott Trudeau The Right Honourable Charles Joseph Clark The Right Honourable Pierre Elliott Trudeau The Right Honourable Lester Bowles Pearson The Right Honourable John George Diefenbaker The Right Honourable Louis Stephen St-Laurent The Right Honourable William Lyon Mackenzie King The Right Honourable Richard Bedford Bennett The Right Honourable William Lyon Mackenzie King The Right Honourable Arthur Meighen The Right Honourable William Lyon Mackenzie King ## Dictionary Associate a set of *keys* with a set of *values*. Ask for the value associated with any key without examining every other key/value pair. <table> <thead> <tr> <th>Year</th> <th>City, Country</th> <th>Year</th> <th>City, Country</th> </tr> </thead> <tbody> <tr> <td>1896</td> <td>Athens, Greece</td> <td>1968</td> <td>Mexico City, Mexico</td> </tr> <tr> <td>1900</td> <td>Paris, France</td> <td>1972</td> <td>Munich, West Germany</td> </tr> <tr> <td>1904</td> <td>St. 
Louis, United States</td> <td>1976</td> <td>Montréal, Canada</td> </tr> <tr> <td>1908</td> <td>London, United Kingdom</td> <td>1980</td> <td>Moscow, Soviet Union</td> </tr> <tr> <td>1912</td> <td>Stockholm, Sweden</td> <td>1984</td> <td>Los Angeles, United States</td> </tr> <tr> <td>1920</td> <td>Antwerp, Belgium</td> <td>1988</td> <td>Seoul, South Korea</td> </tr> <tr> <td>1924</td> <td>Paris, France</td> <td>1992</td> <td>Barcelona, Spain</td> </tr> <tr> <td>1928</td> <td>Amsterdam, Netherlands</td> <td>1996</td> <td>Atlanta, United States</td> </tr> <tr> <td>1932</td> <td>Los Angeles, United States</td> <td>2000</td> <td>Sydney, Australia</td> </tr> <tr> <td>1936</td> <td>Berlin, Germany</td> <td>2004</td> <td>Athens, Greece</td> </tr> <tr> <td>1948</td> <td>London, United Kingdom</td> <td>2008</td> <td>Beijing, China</td> </tr> <tr> <td>1952</td> <td>Helsinki, Finland</td> <td>2012</td> <td>London, United Kingdom</td> </tr> <tr> <td>1956</td> <td>Melbourne, Australia</td> <td>2016</td> <td>Rio de Janeiro, Brazil</td> </tr> <tr> <td>1960</td> <td>Rome, Italy</td> <td>2020</td> <td>Tokyo, Japan</td> </tr> <tr> <td>1964</td> <td>Tokyo, Japan</td> <td></td> <td></td> </tr> </tbody> </table>

<table> <thead> <tr> <th>SONG</th> <th>ARTIST</th> <th>ALBUM</th> <th>Updated</th> </tr> </thead> <tbody> <tr> <td>Ways To Go - Margot Mix</td> <td>Weval, Margot</td> <td>Weval Remix</td> <td>11 hours ago</td> </tr> <tr> <td>Death Is A Girl</td> <td>Mini Mansions</td> <td>The Great Pretender</td> <td>11 hours ago</td> </tr> <tr> <td>Jumbo</td> <td>Underworld</td> <td>Beaucoup Fish</td> <td>11 hours ago</td> </tr> <tr> <td>Bug Powder Dust</td> <td>The Mysterons</td> <td>Meandering</td> <td>11 hours ago</td> </tr> <tr> <td>...To Have No Answer</td> <td>Flock of Dimes</td> <td>If You See Me, Sa...</td> <td>11 hours ago</td> </tr> <tr> <td>I'll Cut You Down</td> <td>Uncle Acid &amp; The...</td> <td>Blood Lust</td> <td>11 hours ago</td> </tr> <tr> <td>L'enfer ce n'est pas les autres c'est moi</td> <td>The Eye Of Time</td> <td>Myth I: A Last Da...</td> <td>11 hours ago</td> </tr> <tr> <td>Terrain</td> <td>pg.lost</td> <td>Key</td> <td>11 hours ago</td> </tr> </tbody> </table>

String operations

```java
String wd = "...";                      // initialize a variable from a string literal
int len = wd.length();                  // count the number of characters in a string
char c = wd.charAt(2);                  // extract a character from a string, like accessing an array
String str3 = str1 + str2;              // join two strings end to end
if( str1.equals( str2 ) ) { ... }       // compare two strings for equal contents
String[] words = splitTokens( str1 );   // break a string into words by looking for whitespace
```

Messier text

```java
String[] splitTokens(String text, String delims) { ... }
```

Break the long string `text` into “words”, where the characters in `delims` (and not whitespace) are treated as breakpoints.
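Outside of Processing, the same tokenization can be sketched in plain Java with `String.split`, passing the delimiter characters as a regex character class. The class name `TokenDemo` and the sample input are our own, not from the slides:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class TokenDemo {
    // Roughly what splitTokens(text, delims) does: break text wherever any of
    // the delimiter characters occurs, dropping empty pieces.
    static String[] tokenize(String text, String delims) {
        // Pattern.quote escapes the delimiters; [...]+ groups runs of them.
        String[] parts = text.split("[" + Pattern.quote(delims) + "]+");
        // split leaves a leading empty string if text starts with a delimiter.
        return Arrays.stream(parts)
                .filter(s -> !s.isEmpty())
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String[] words = tokenize("one,two;;three", ",;");
        System.out.println(String.join("|", words)); // one|two|three
    }
}
```

Note the contrast with whitespace splitting: here only the characters listed in `delims` act as breakpoints.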
```java
String trim(String text) { ... }
```

Return a copy of `text` with any excess whitespace removed from the start and end.

Example: the Region of Waterloo’s list of reserved street names

<table> <thead> <tr> <th>FullStreetName</th> <th>Municipality</th> </tr> </thead> <tbody> <tr> <td>Abbey Glen</td> <td>Kitchener</td> </tr> <tr> <td>Aberle</td> <td>Woolwich</td> </tr> <tr> <td>Abeth</td> <td>Kitchener</td> </tr> <tr> <td>Abitibi</td> <td>Cambridge</td> </tr> <tr> <td>Able</td> <td>Cambridge</td> </tr> <tr> <td>Abram Clemens St</td> <td>Kitchener</td> </tr> <tr> <td>Accobee</td> <td>Cambridge</td> </tr> <tr> <td>Adair</td> <td>Cambridge</td> </tr> <tr> <td>Adcock</td> <td>Region of Waterloo</td> </tr> <tr> <td>Addlev</td> <td>Cambridge</td> </tr> </tbody> </table>

Reading the dictionary

A a aa aal aalii aam Aani aardvark aardwolf Aaron Aaronic Aaronical Aaronite Aaronitic Aaru Ab aba Ababdeh Ababua abac abaca abacate abacay abacinate abacination abaciscus

Reading the dictionary

- Find the longest word
- Find all words with three or more Ys
- Find all words ending with MT
- Find all words starting with TM
- Find all words ending with DOUS
- Find all words containing UFA
- Find all words ending in GRY
- Find all palindromes
- Find words with three consecutive double letters
- Find the longest word whose letters are in alphabetical order
- Find the longest word with no

Finding things in strings

```java
if( str.contains( "abc" ) ) { ... }
```

Check if the string `str` has the substring “abc” anywhere inside of it.

```java
if( str.startsWith( "def" ) ) { ... }
if( str.endsWith( "ghi" ) ) { ... }
```

Look for a substring specifically at the start or end of a string.

Writing a spellchecker

With the dictionary at our disposal, it’s easy to check if a given string is a word.

```java
String[] dict;

void setup() {
  dict = loadStrings( "words.txt" );
}

boolean isWord( String word ) {
}
```
```java
String[] dict;

void setup() {
  dict = loadStrings( "words.txt" );
}

boolean isWord( String word ) {
  for ( int idx = 0; idx < dict.length; ++idx ) {
    if ( dict[idx].equals( word ) ) {
      return true;
    }
  }
  return false;
}
```

Looping over every word works, but it’s painfully slow, especially when the word isn’t there!

Dictionaries

In programming, a *dictionary* is a mapping from a set of *keys* to a set of *values*. Any given key may have at most one associated value.

- Year → Olympic host city
- Name → Phone number
- Student ID number → Exam seating code
- Clicker ID → Student ID number
- Server name → IP address

Dictionary operations we might care about:

- Look up the value associated with a given key
- Ask if the dictionary has a given key
- Add a new key to the dictionary, with its associated value
- Remove a key and its value from the dictionary

Processing includes a few handy dictionary classes, where the keys are Strings:

- **IntDict**: map Strings to ints
- **FloatDict**: map Strings to floats
- **StringDict**: map Strings to Strings

Java allows more-or-less arbitrary mappings between keys and values:

- `java.util.TreeMap<K,V>`: map any key type `K` to any value type `V`.
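As a sketch of the general Java version, here is `java.util.TreeMap` used for the year → host-city mapping from the table above. The class name and the three sample entries are our own choices for illustration:

```java
import java.util.TreeMap;

public class HostCities {
    public static void main(String[] args) {
        // Map any key type K to any value type V; here Integer -> String.
        TreeMap<Integer, String> hosts = new TreeMap<>();

        // Add new keys to the dictionary, with their associated values.
        hosts.put(1896, "Athens, Greece");
        hosts.put(2012, "London, United Kingdom");
        hosts.put(2016, "Rio de Janeiro, Brazil");

        // Look up the value associated with a given key.
        System.out.println(hosts.get(2016)); // Rio de Janeiro, Brazil

        // Ask if the dictionary has a given key.
        System.out.println(hosts.containsKey(1940)); // false

        // Remove a key and its value from the dictionary.
        hosts.remove(2012);
        System.out.println(hosts.size()); // 2
    }
}
```

`TreeMap` keeps keys sorted; `java.util.HashMap` offers the same operations with faster average-case lookups when ordering does not matter.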
```java
IntDict myDict = new IntDict();              // create a new, empty dictionary

myDict.set( "Kumquat", 13 );                 // add a new key to the dictionary,
myDict.set( "Durian", 19 );                  // with its associated value

println( myDict.get( "Kumquat" ) );          // look up the value associated with a given key

if( myDict.hasKey( "Rambutan" ) ) { ... }    // ask if the dictionary has a given key

myDict.remove( "Durian" );                   // remove a key and its value from the dictionary
```

Writing a spellchecker

```java
String[] dict;

void setup() {
  dict = loadStrings( "words.txt" );
}

boolean isWord( String word ) {
  for ( int idx = 0; idx < dict.length; ++idx ) {
    if ( dict[idx].equals( word ) ) {
      return true;
    }
  }
  return false;
}
```

Writing a spellchecker

```java
IntDict myDict;

void setup() {
  myDict = new IntDict();
  String[] words = loadStrings( "words.txt" );
  for( int idx = 0; idx < words.length; ++idx ) {
    myDict.set( words[idx], 1 );
  }
}

boolean isWord( String word ) {
  return myDict.hasKey( word );
}
```

These are guaranteed to be fast!

# Counting things

<table> <thead> <tr> <th>Absalom, Absalom!</th> <th>A Farewell To Arms</th> <th>Alice in Wonderland</th> </tr> </thead> <tbody> <tr> <td>Blood Meridian</td> <td>Frankenstein</td> <td>Great Expectations</td> </tr> <tr> <td>Huckleberry Finn</td> <td>Pride and Prejudice</td> <td>Ulysses</td> </tr> </tbody> </table>

Finding patterns

It’s easy to search a string for a given phone number:

```java
if( myString.contains( "(519) 888-4567" ) ) { ... }
```

But what if we wanted to find all the phone numbers in a string?

Finding patterns

Regular Expressions are a general tool for finding patterns in strings.

Regular Expressions are a programming language for finding patterns in strings.

Regular Expressions are a cryptic programming language for finding patterns in strings.

xkcd.com/208/

Look for an instance of the regular expression pattern inside of the string text. If the answer is not null, the pattern was found.
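In plain Java, the find-all-matches question above can be answered with `java.util.regex`. The class name and the phone-number pattern below (shaped like (519) 888-4567) are our own sketch, not from the slides:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PhoneFinder {
    // Match numbers shaped like (519) 888-4567: an area code in parentheses,
    // optional whitespace, then the seven-digit local number.
    static final Pattern PHONE = Pattern.compile("\\(\\d{3}\\)\\s*\\d{3}-\\d{4}");

    static List<String> findPhones(String text) {
        List<String> found = new ArrayList<>();
        Matcher m = PHONE.matcher(text);
        while (m.find()) {        // keep scanning so every occurrence is collected
            found.add(m.group());
        }
        return found;
    }

    public static void main(String[] args) {
        List<String> hits = findPhones("Call (519) 888-4567 or (416) 555-0199.");
        System.out.println(hits.size()); // 2
        System.out.println(hits.get(0)); // (519) 888-4567
    }
}
```

`Matcher.find()` is the "look for an instance anywhere inside the string" operation; calling it in a loop yields all non-overlapping matches.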
## Regular Expressions - Quick Reference Guide

### Anchors
- `^` start of line
- `$` end of line
- `\b` word boundary
- `\B` not at word boundary
- `\A` start of subject
- `\G` first match in subject
- `\z` end of subject
- `\Z` end of subject or before newline at end

### Non-printing characters
- `\a` alarm (BEL, hex 07)
- `\e` escape (hex 1B)
- `\f` formfeed (hex 0C)
- `\n` newline (hex 0A)
- `\r` carriage return (hex 0D)
- `\t` tab (hex 09)
- `\v` vertical tab (hex 0B)
- `\xhh` hex code hh
- `\ddd` octal code ddd

### Generic character types
- `\d` decimal digit
- `\D` not a decimal digit
- `\s` whitespace character
- `\S` not a whitespace character
- `\w` "word" character
- `\W` "non-word" character

### POSIX character classes
- `alnum` letters and digits
- `alpha` letters
- `ascii` character codes 0-127
- `blank` space or tab only
- `cntrl` control characters
- `digit` decimal digits
- `graph` printing chars - space
- `lower` lower case letters
- `print` printing chars + space
- `punct` printing chars - alnum
- `space` white space
- `upper` upper case letters
- `word` "word" characters
- `xdigit` hexadecimal digits

### Literal characters
- Letters and digits match exactly `a x B 7 0`
- Some special characters match exactly `@ - = %`
- Escape other specials with a backslash `\. \* \[ \\`

### Character groups
- Almost any character (usually not newline) `.`
- Lists and ranges of characters `[ ]`
- Any character except those listed `[^ ]`

### Counts (add ? for non-greedy)
- 0 or more ("perhaps some") `*`
- 0 or 1 ("perhaps a") `?`
- 1 or more ("some") `+`
- Between "n" and "m" `{n,m}`
- Exactly "n", "n" or more `{n}`, `{n,}`

### Alternation
- Either/or `|`

### Lookahead and lookbehind
- Followed by `(?= )`
- NOT followed by `(?! )`
- Following `(?<= )`
- NOT following `(?<! )`

### Grouping
- For capture and counts `( )`
- Non-capturing `(?: )`
- Named captures `(?<name> )`

### Back references
- Numbered `\n`
- Relative `\g{-n}`
- Named `\k<name>`

### Character group contents
- Individual chars `x`
- Character range `x-y`
- POSIX char class `[:class:]`
- Negated POSIX class `[:^class:]`

### Examples
- `[a-zA-Z0-9_]`
- `[[:alnum:]]`

### Comments
- `(?#comment)`

### Conditional subpatterns
- `(?(condition)yes-pattern)`
- `(?(condition)yes-pattern|no-pattern)`

### Recursive patterns
- `(?n)` numbered
- `(?0)` `(?R)` entire regex
- `(?&name)` named

### Replacements
- `$n` reference capture

### Case foldings
- `\u` upper case next char
- `\U` upper case following
- `\l` lower case next char
- `\L` lower case following
- `\E` end case folding

### Conditional insertions
- `(?n:insertion)`
- `(?n:insertion:otherwise)`

[http://www.e-texteditor.com](http://www.e-texteditor.com)

Some example patterns:

- Substring "ufa" anywhere in a word: `ufa`
- Word ending in "mt": `mt$`
- Word with three or more "y"s, on a line by itself: `y.*y.*y`
- An integer: `^(-?[1-9]+\d*)$|^0$`
- An email address: `\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,}\b`
- A URL: `^(https?:\/\/)?([\da-z\.-]+\.[a-z\.-]{2,6})([\/\w\.-]*)*$`
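The email pattern from the examples can be exercised in plain Java; this is an illustrative sketch (class and method names are not from the slides), using Java's case-insensitive flag so lower-case addresses also match:

```java
import java.util.regex.Pattern;

public class EmailCheck {
    // The email pattern from the reference card, compiled case-insensitively.
    static final Pattern EMAIL = Pattern.compile(
            "\\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\\.[A-Z]{2,}\\b",
            Pattern.CASE_INSENSITIVE);

    // matches() requires the pattern to cover the whole string.
    public static boolean isEmail(String s) {
        return EMAIL.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isEmail("[email protected]"));  // whole string is an address
        System.out.println(isEmail("not-an-email"));     // no @, so it fails
    }
}
```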
DiffTune: A Diffusion-Based Approach to Diverse Instruction-Tuning Data Generation

Abstract

Instruction tuning has become pivotal in enhancing the adaptability and responsiveness of Large Language Models (LLMs) to human instructions. Despite its critical role, current methods for generating instruction-tuning datasets exhibit significant bottlenecks, primarily in terms of high cost and limited diversity. However, as previously shown in the literature, the diversity of an instruction-tuning dataset is crucial to an LLM's downstream performance. To address these challenges, we propose a Diffusion Language Model (DiffLM)-based technique to generate unlimited diverse instructions at a low cost. Specifically, we have enhanced the variability of instructions by strategically modifying the sampling process within the DiffLM. Our method presents the opportunity to augment any existing instruction-tuning dataset, thereby enriching its content and potential utility. Both automatic and human evaluation show that our generated instructions achieve high quality and better n-gram diversity than the original dataset. Instruction tuning of LLaMA on the augmented dataset delivers better instruction-following capability and superior performance on a broad set of benchmarks, indicating the effectiveness of our instruction generation method.

1 Introduction

Large Language Models (LLMs), particularly following the advent of ChatGPT, have seen a surge in popularity due to their impressive performance capabilities [1][2]. To maximize LLMs' potential and adapt pre-trained models to specific domains or downstream tasks, instruction tuning emerges as an indispensable step [3][4]. It involves the generation of bespoke datasets that guide LLMs to respond more effectively to human instructions across varying tasks and domains. Existing instruction-tuning techniques generally fall into two categories: human-labeled and machine-generated approaches.
Human-labeled methods [3][5] are highly accurate and contextually rich, but are difficult to scale up and expensive to procure. Machine-generated techniques [6, 7, 8] are easily scalable but lack the necessary diversity, creating a gap between the instructions in the dataset and real-world user prompts. Generating datasets by querying commercial language models also involves costs, which can be substantial [9][7]. The inherent limitations of existing instruction-tuning techniques underscore the need for a more effective, scalable, and cost-efficient approach, forming our research's central theme.

Given the aforementioned challenges, we propose DiffTune, a novel data generation technique utilizing the Diffusion Language Model (DiffLM). The diffusion model, as a kind of generative model, works by simulating a process of random walks from a simple initial distribution toward the target distribution, resulting in nuanced and detailed data generation. Building upon this foundation, DiffTune leverages the inherent properties of a DiffLM: by replacing the original sampling strategy within the DiffLM with our topic diversity enhancing sampling, DiffTune generates high-quality instruction-tuning datasets with enhanced diversity at a lower cost.

We put our DiffTune-generated dataset to the test by finetuning an accessible LLM, LLaMA [10]. Our methodology involved augmenting or replacing the original datasets with the instructions generated by DiffTune. The LLaMA model, when finetuned with our DiffTune-generated dataset, displayed a remarkable increase in its instruction-following capabilities in terms of higher validity and human preference. This underscores the potential of DiffTune to optimize LLM performance cost-effectively.

2 Data Collection for DIFFIT

This section introduces the fully automatic collection process of our instruction tuning dataset DIFFIT.
The overall data collection process is shown in Figure 1.

2.1 Diffusion Language Model Training

Given an existing instruction-tuning dataset \( \{(X_i, Y_i)\} \), where \( X_i \) and \( Y_i \) are the instruction and output of an instance in the dataset, we train a DiffLM on its instruction set \( \{X_i\} \) as described previously. After this stage, we obtain a trained DiffLM, denoted \( M_{\text{Diff}} \). We can now sample from it: 1) sample a length from the empirical length distribution of the existing instruction set, \( l \sim L(\{X_i\}) \), and 2) sample a noise matrix \( Z_T \in \mathbb{R}^{l \times d} \), \( Z_T \sim \mathcal{N}(0, I) \). The generation \( M_{\text{Diff}}(Z_T) \) is a sequence in the same domain as the original instruction set \( \{X_i\} \).

2.2 Diffusion Language Model Sampling

We sample from the trained \( M_{\text{Diff}} \) to generate new, diverse, and high-quality instructions. Lovelace et al. [11] demonstrated that a DiffLM can generate diversified text sequences with low memorization of its training set when using noise sampled from \( \mathcal{N}(0, I) \). In our method, we further increase the diversity of our sampled instruction set by adopting an innovative sampling strategy that covers the rare topics, concepts, and formats mapped to the long tail of the sampling noise distribution. Inspired by the in-breadth evolving strategy of Xu et al. [12], we propose the topic diversity enhancing sampling strategy: after sampling the noise from a standard Gaussian, \( Z_T \in \mathbb{R}^{l \times d} \sim \mathcal{N}(0, I) \), we randomly select 30% of the tokens and resample them from a distribution with a much higher variance, \( \mathcal{N}(0, 10I) \). This strategy resembles the process of randomly inserting rare tokens into the sequence. With the remaining 70% of originally sampled tokens controlling the overall format, and \( M_{\text{Diff}} \)'s powerful BART decoder, the generation's quality is only slightly compromised.
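A minimal sketch of this sampling step follows. It assumes an \(l \times d\) noise matrix and uses a per-token Bernoulli draw to pick roughly 30% of tokens (the paper selects 30% exactly); all names here are illustrative, not from the paper's code:

```java
import java.util.Random;

public class TopicDiversitySampling {
    /**
     * Draw an l x d noise matrix from N(0, I), then redraw a random
     * ~highVarFraction of the token rows from the wider N(0, 10*I)
     * (i.e., standard deviation sqrt(10) per component).
     */
    public static double[][] sampleNoise(int l, int d, double highVarFraction, Random rng) {
        double[][] z = new double[l][d];
        double wideStd = Math.sqrt(10.0);
        for (int i = 0; i < l; i++) {
            // ~30% of token positions get the high-variance noise
            double std = (rng.nextDouble() < highVarFraction) ? wideStd : 1.0;
            for (int j = 0; j < d; j++) {
                z[i][j] = std * rng.nextGaussian();
            }
        }
        return z;
    }

    public static void main(String[] args) {
        double[][] z = sampleNoise(16, 8, 0.30, new Random(42));
        System.out.println(z.length + " x " + z[0].length);
    }
}
```

The remaining ~70% of rows keep the standard-Gaussian statistics that the decoder was trained against, which is what preserves the overall instruction format.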
The post-processing step can mitigate the slightly lower generation quality.

Figure 1: The dataset collection process of DiffTune.

<table> <thead> <tr> <th>Dataset</th> <th>Size</th> <th>Avg. len.</th> <th>#Unique Tokens↑</th> <th>4-gram Rep.↓</th> <th>4-gram SelfBLEU↓</th> </tr> </thead> <tbody> <tr> <td>Unnat. Inst.</td> <td>68478</td> <td>93.92</td> <td>42455</td> <td>0.61</td> <td>0.85</td> </tr> <tr> <td>Self-Instruct</td> <td>82439</td> <td>35.59</td> <td>19920</td> <td>0.70</td> <td>0.90</td> </tr> <tr> <td>Alpaca</td> <td>52002</td> <td>22.79</td> <td>21027</td> <td>0.43</td> <td>0.69</td> </tr> <tr> <td>GPT4-Alpaca</td> <td>52002</td> <td>22.79</td> <td>21027</td> <td>0.43</td> <td>0.69</td> </tr> <tr> <td>Code-Alpaca</td> <td>20022</td> <td>29.90</td> <td>8671</td> <td>0.55</td> <td>0.79</td> </tr> <tr> <td>OASST1</td> <td>55668</td> <td>25.09</td> <td>36150</td> <td>0.83</td> <td>0.96</td> </tr> <tr> <td>S.A.D.</td> <td>52000</td> <td>30.24</td> <td>55455</td> <td>0.19</td> <td>0.51</td> </tr> <tr> <td>DIFFIT</td> <td>52000</td> <td>28.64</td> <td>49322</td> <td>0.14</td> <td>0.49</td> </tr> </tbody> </table>

Table 1: Evaluation of existing similarly-sized datasets' input instructions in terms of different diversity metrics. S.A.D.: 52K instructions sampled from the concatenation of ShareGPT + Alpaca + Dolly's instruction set with stratified sampling. ↑: Higher is better. ↓: Lower is better. The best and second-best results are labeled in **bold** and *underline*, respectively.

## 2.3 Instruction Post-Processing

In this step, we filter the sampled instructions from $M_{Diff}$ with a perplexity threshold. We use GPT2-Large to compute the perplexity. This simple yet effective post-processing strategy can decrease the average perplexity of the generated dataset by a factor of four.
However, since perplexity computation largely depends on the evaluation model, which is not explicitly pre-trained on the instruction domain, this process potentially filters out some valid but highly diversified instructions. We leave developing a better post-processing strategy as future work.

## 2.4 Instruction Output Generation

After obtaining a predefined number of valid instructions with the previous steps, we generate the outputs by prompting an existing LLM. We iterate over all generated instructions, prompt the LM with each instruction, and collect the LLM's response. We filter out the instructions with an invalid response (e.g., the LLM's output contains no helpful information or deems the instruction not a self-contained sequence). The remaining instruction-output pairs form our instruction-tuning dataset.

## 3 Instruction Data Analysis

We apply the method introduced in Section 2 to the concatenation of three open-source datasets' instruction sets: ShareGPT[^1], Dolly [5] and Alpaca [13]. The combined dataset ShareGPT-Alpaca-Dolly (S.A.D.) contains 107,442 instructions. We sample 1000 instructions from it with stratified sampling as the test set, while the remaining 106,442 instructions are used for training the DiffLM. We sampled an instruction set with a DiffLM trained on the joint S.A.D. training set, using a BART-Large as its decoder. The output for each instruction was collected with gpt-3.5-turbo. The resulting dataset contains 52,000 diverse instructions with high-quality outputs. We name our dataset DIFFIT (Diffusion-based Instruction Tuning dataset).

We compare our dataset DIFFIT with several open-sourced instruction tuning datasets in Table 1 in terms of instruction diversity. Among the similarly sized datasets, S.A.D. and our DIFFIT achieve the highest unique token counts. Although DIFFIT has a lower unique token count compared to S.A.D., it achieves better $n$-gram diversity in terms of 4-gram Repetition and 4-gram SelfBLEU.
This suggests that compared to DiffLM’s training instruction set, the sampled instructions from DiffLM can cover more new complex concepts or phrases. ## 4 Instruction Tuning Experiments We conduct instruction-tuning on a pre-trained LLM, LLaMA [10] with our sampled DIFFIT dataset. In this section, we compare a LLaMA 7B fine-tuned on DIFFIT with LLaMA 7B finetuned on similar-sized instruction-tuning datasets with the same training settings. [^1]: We use an open-source version of ShareGPT from https://hf.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered <table> <thead> <tr> <th>Model &amp; Dataset</th> <th>MMLU</th> <th>GSM</th> <th>Codex-HumanEval</th> <th>TydiQA</th> </tr> </thead> <tbody> <tr> <td></td> <td><strong>Avg.</strong></td> <td>0-shot</td> <td>5-shot</td> <td>CoT</td> </tr> <tr> <td>LLaMA 7B [10]</td> <td>31.9</td> <td>35.2</td> <td>6.0</td> <td>9.0</td> </tr> <tr> <td>+ Unnat. Inst. [8]</td> <td>42.9</td> <td>38.1</td> <td>3.5</td> <td>5.0</td> </tr> <tr> <td>+ Self-Instruct [6]</td> <td>35.7</td> <td>33.2</td> <td>4.0</td> <td>6.5</td> </tr> <tr> <td>+ Alpaca [13]</td> <td>41.5</td> <td>40.3</td> <td>7.0</td> <td>10.0</td> </tr> <tr> <td>+ GPT4-Alpaca [7]</td> <td>42.6</td> <td>38.3</td> <td>6.5</td> <td>10.0</td> </tr> <tr> <td>+ Code-Alpaca [14]</td> <td>34.7</td> <td>34.5</td> <td>6.5</td> <td>10.0</td> </tr> <tr> <td>+ OASST1 [15]</td> <td>32.9</td> <td>29.7</td> <td>6.0</td> <td>6.5</td> </tr> <tr> <td>+ S.A.D.</td> <td>37.4</td> <td>27.3</td> <td>5.5</td> <td>14.0</td> </tr> <tr> <td>+ DIFFIT</td> <td>39.7</td> <td>33.6</td> <td>7.5</td> <td>14.5</td> </tr> <tr> <td>+ S.A.D. (2x Training)</td> <td>38.7</td> <td>30.5</td> <td>4.5</td> <td>13.5</td> </tr> <tr> <td>+ S.A.D.+DIFFIT</td> <td>40.6</td> <td>32.7</td> <td>6.5</td> <td>14.5</td> </tr> </tbody> </table> The results on a suite of automatic evaluations on LLM’s general capability are shown in Table 2. 
Instruction tuning on DIFFIT achieves better performance than instruction tuning on the similarly sized DiffLM training set S.A.D., and augmenting S.A.D. with the generated dataset further improves the LLM's general performance. This suggests that DiffTune can generate high-quality instructions that augment existing diverse instruction-tuning datasets to increase an LLM's general capabilities.

We show the automatic evaluation on AlpacaFarm in Figure 2. We find that although a mixture of 52K S.A.D. instructions is itself a diverse dataset, DIFFIT sampled by DiffTune further increases the response quality by achieving a higher win rate against text-davinci-003. S.A.D. augmented by DIFFIT achieves a 2.2% win rate increase compared to LLaMA trained on S.A.D. with the same training steps. This shows the effectiveness of DiffTune as both an instruction-tuning generation method and an instruction set augmentation approach.

Lastly, we conduct a human evaluation of the validity and helpfulness of the LLM's responses to real-world human instructions. In Figure 3, we illustrate the percentage of valid responses as judged by human evaluators. The original 52K S.A.D. dataset achieves better response validity when augmented with DIFFIT, an increase of 1.3 percentage points brought by our method.

5 Conclusion

We introduce DiffTune, a novel method for generating instruction-tuning datasets that overcomes the limitations of current techniques. By leveraging the capabilities of Diffusion Language Models and revising the sampling strategy, DiffTune generates diverse, high-quality instruction-tuning datasets in a cost-effective manner. The superior performance of the LLaMA model, when finetuned with our DiffTune-generated dataset, emphasizes the efficacy of our approach.
Both automatic and human evaluations underscore the quality and diversity of the data generated by DiffTune, showcasing its potential to optimize the performance of Large Language Models across varied tasks and domains.

References

[10] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Thimothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez,

A Related Works

Instruction-Tuning Datasets. Recently, instruction tuning of LLMs has been a hot research area in NLP [6, 4, 17, 18]. The dataset used for LLM instruction tuning has to be diverse to cover various tasks, scenarios, and input formats. To guarantee this diversity requirement, previous literature generates instruction data from a variety of existing NLP datasets [3, 4, 17, 19, 20], various online forums [21], human labeling [5], and crowdsourcing [15, 22], or by generation with existing (proprietary) LLMs [6, 13, 7, 8, 12, 23]. However, LLM-generated instruction tuning datasets frequently suffer from lower diversity and less authenticity than human-generated datasets, while human-labeled datasets have a high generation cost. Although crowdsourcing-based instruction tuning datasets achieve better variety and can best reflect real-world user prompts, collecting a large set of user inputs is still costly. This paper introduces a cost-effective method of instruction-tuning dataset generation that can construct or augment an existing dataset with crowdsourcing-level quality and diversity with no extra human labeling.

Diffusion Models for Language Generation. Different from mainstream auto-regressive language models (ARLM) [1, 10], which generate text token by token, diffusion language models (DiffLM) fall into the category of non-auto-regressive language models (NARLM), which generate all tokens in parallel. The Diffusion LM was first introduced by Li et al.
[24], where the authors trained a diffusion model in the token embedding space. DiffLM has been applied to controllable text generation [24, 25], unconditional text generation [26, 28] and sequence-to-sequence tasks [29, 32]. On the task of unconditional generation, compared to ARLM, DiffLM can achieve more robust and efficient text sequence generation [33] and higher generation diversity [11]. This paper adopts the DiffLM proposed by Lovelace et al. [11] for diverse and high-quality instruction generation.

B Backgrounds on Diffusion Language Models

Diffusion models [34, 35] aim to approximate a target distribution \( p(x) \) by learning a reversible transition between it and a Gaussian distribution. The forward process takes a sample \( x \) from the target distribution and gradually corrupts it into a chain of increasingly noisy latents \( \{z_0, z_1, \ldots, z_T\} \), where \( q(z_{t+1} \mid z_t) = \mathcal{N}(\sqrt{1 - \beta_t}\, z_t, \beta_t I) \) with some variances \( \beta_t \). The inversion of the forward process is called denoising, where one samples from the Gaussian distribution \( z_T \sim \mathcal{N}(0, I) \) and sequentially produces a chain of less noisy samples \( \{z_T, z_{T-1}, \ldots, z_0\} \), whose final element \( z_0 \) is a sample from the target distribution \( p(x) \). For that, one trains a denoising neural network \( f_\theta(z_t, t) \), which approximately recovers the original sample from the target distribution, \( x \), from its noisy version \( z_t \). Specifically, for any \( x \sim p(x) \) and any time step \( t \), one samples the noisy version \( z_t \sim q(z_t \mid x) \) with the forward process, and recovers it so that \( x_\theta \approx x \). When the denoising network is trained, one can generate samples from the target distribution using the denoising Markov chain described above. Going from \( z_t \) to \( z_{t-1} \) requires sampling from the distribution \( p(z_{t-1} \mid z_t) := \mathcal{N}(\mu_t(z_t, x_\theta), \sigma^2_t I) \), where \( \mu_t(z_t, x) \) has a closed-form solution.
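The reverse chain described above has the following control flow. This is a deliberately toy sketch: the latent is a scalar rather than a vector sequence, the "denoiser" is a stub standing in for the trained network \(f_\theta\), and the update is an illustrative contraction standing in for the closed-form \(\mu_t\); none of it is the paper's actual model:

```java
import java.util.Random;

public class DenoisingSketch {
    // Stub for the trained denoising network f_theta(z_t, t).
    interface Denoiser {
        double predictX0(double zt, int t);
    }

    /**
     * Run the reverse (denoising) Markov chain: start from z_T ~ N(0, I)
     * and repeatedly sample z_{t-1} ~ N(mu_t, sigma_t^2) until z_0.
     */
    public static double sample(int T, Denoiser f, Random rng) {
        double z = rng.nextGaussian();           // z_T ~ N(0, I)
        for (int t = T; t >= 1; t--) {
            double x0 = f.predictX0(z, t);       // approximate clean sample
            double mu = 0.5 * (z + x0);          // toy stand-in for mu_t(z_t, x_theta)
            double sigma = (t > 1) ? 0.1 : 0.0;  // conventionally, no noise on the last step
            z = mu + sigma * rng.nextGaussian();
        }
        return z;                                // z_0: a sample near the "target"
    }

    public static void main(String[] args) {
        // A denoiser that always predicts 1.0 pulls the chain toward 1.0.
        double z0 = sample(50, (zt, t) -> 1.0, new Random(1));
        System.out.println(z0);
    }
}
```

The point of the sketch is the loop structure: each step combines the current latent with the network's estimate of the clean sample, then re-injects a small amount of noise except at the final step.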
Applying diffusion models to NLP is not straightforward because of the discrete nature of language. We use the model suggested by Lovelace et al. [11], which models the latent space of an encoder-decoder language model with diffusion. In particular, they used BART [36], because it was trained with a denoising objective on the latent representation; hence, approximate samples from the diffusion model can be meaningfully decoded. Instead of learning to generate a single vector from the latent space, one needs to sample a sequence of vectors, which would ideally be decoded into a valid sentence. For that, given a length \( l \) and a sampled noise \( Z_T \in \mathbb{R}^{l \times d} \sim \mathcal{N}(0, I) \), the denoised latent is decoded into a sequence in the target domain.

C Instruction Data Generation Analysis

We apply DiffTune to different existing instruction tuning datasets, different DiffLM decoder sizes, and different sampling strategies. We analyze these three aspects of the instruction generation settings one by one in the following subsections.

Table 3: The comparison of sample diversity and quality among different DiffLM decoder selections and DiffLM sampling strategies, based on 1000 sampled instructions. ↑: Higher is better. ↓: Lower is better. The setting with the highest performance for each metric is labeled in bold, while the second-highest is underlined.

C.1 Experiment Details

To maximize the diversity of the sampled new instructions from the DiffLM, we train our DiffLM on the concatenation of the following open-source datasets' instruction sets: 1) ShareGPT. It is collected from publicly shared real-world user dialogues with ChatGPT. We filtered the dataset to contain only first-round user inputs with a valid response from ChatGPT. The filtered dataset contains 40,428 instructions. 2) Dolly. It contains 15,011 instructions created by Databricks employees. 3) Stanford Alpaca.
It is generated with a modified self-instruct strategy [6] using text-davinci-003, containing 52,002 instructions. The combined dataset ShareGPT-Alpaca-Dolly (S.A.D.) contains 107,442 instructions. We sample 1000 instructions from it with stratified sampling as the test set, while the remaining 106,442 instructions are used for training our DiffLM. In this section, we train two DiffLMs with different decoder sizes: BART-Base or BART-Large. After training, we test four different sampling strategies:

1. **Standard Gaussian.** We sample from the DiffLM with \( Z_T \in \mathbb{R}^{l \times d} \sim \mathcal{N}(0, I) \).
2. **Student T.** We sample from the DiffLM with \( Z_T \in \mathbb{R}^{l \times d} \sim t_2 \). This noise distribution puts a higher probability on the long tail compared to the original strategy.
3. **30% Higher Variance.** We apply the sampling strategy introduced in DiffTune.
4. **100% Higher Variance.** We sample from a Gaussian with a higher variance for all tokens: \( Z_T \in \mathbb{R}^{l \times d} \sim \mathcal{N}(0, 10I) \).

We sample 1000 instructions from each configuration with beam search. The sampled instruction set is evaluated with the following metrics:

1. **Repetition** [37] measures generation diversity by the proportion of repetitive n-grams: \( \text{rep}_n = 100 \times \left( 1 - \frac{|\text{unique n-grams}(z)|}{|\text{total n-grams}(z)|} \right) \).
2. **n-gram Diversity** [37] measures generation diversity by considering different n-gram repetitions: \( \text{div} = \prod_{n=2}^{4} \left( 1 - \frac{\text{rep}_n}{100} \right) \).
3. **SelfBLEU** [38] measures generation diversity by computing the average of each generated instance's BLEU score against all others.
4. **Memorization** [11] measures generation diversity by computing the proportion of 4-grams from the generated sequences that exist in the training set.
5.
**MAUVE** [39] measures generation quality by considering the closeness of token distributions between the generated and reference sets. We compute the MAUVE score against S.A.D.'s test set.
6. **Perplexity (ppl)** measures generation quality by how likely a language model is to generate the sequence. We compute ppl with GPT2-Large.

C.2 Comparisons of Training Settings

We sample 1000 instructions from each setting with a perplexity threshold of 150 applied, which aligns with our final instruction generation process. We compare the diversity and quality of each generated instruction set with S.A.D.'s test set. The results are shown in Table 3.

**DiffLM's decoder model size.** When the instructions are sampled from a standard Gaussian, using the smaller BART-Base as the DiffLM's decoder achieves on-par or slightly better diversity and quality across all metrics except for perplexity, which aligns with the observation from Lovelace et al. [11]. However, a different trend emerges with the other sampling strategies. For generation quality, under the same sampling strategy, BART-Large settings always achieve a lower perplexity than their BART-Base counterparts. The higher generation quality from a BART-Large decoder was also observed during our case studies. For generation diversity, when using a sampling distribution distant from a standard Gaussian, BART-Large generally achieves a higher diversity gain.

**DiffLM's sampling strategy.** Although all tested settings achieve higher diversity across all metrics compared to S.A.D.'s test set, we found that using a noise distribution other than a standard Gaussian always yields more diversified instruction generation. Although the generated instructions sometimes include grammatical errors or unknown concepts, these can be denoised in the output generation process by larger LLMs, and such instructions better resemble real-world user inputs, where the prompts are not always perfect.
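The n-gram repetition metric (item 1 above) can be sketched as follows, here reported on a 0-1 scale rather than a percentage, which matches how the tables present it; the names are illustrative, not from the paper's code:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class NgramRepetition {
    /**
     * Fraction of n-grams in a token sequence that are duplicates:
     * rep_n = 1 - (unique n-grams / total n-grams), on a 0-1 scale.
     */
    public static double repetition(String[] tokens, int n) {
        int total = tokens.length - n + 1;
        if (total <= 0) return 0.0;       // sequence shorter than n
        Set<String> unique = new HashSet<>();
        for (int i = 0; i < total; i++) {
            // join n consecutive tokens into one key
            unique.add(String.join(" ", Arrays.copyOfRange(tokens, i, i + n)));
        }
        return 1.0 - (double) unique.size() / total;
    }

    public static void main(String[] args) {
        String[] toks = "a b c d a b c d".split(" ");
        // 5 total 4-grams, 4 unique ("a b c d" occurs twice)
        System.out.println(repetition(toks, 4));
    }
}
```

Averaging this value over a corpus of generated instructions gives the 4-gram repetition figures reported for each sampling strategy.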
We provide a simple case study of using different percentages of high-variance noises in Table 4. We begin with a noise matrix $Z_T \sim \mathcal{N}(0, I)$, and gradually substitute a specific percentage of its column vectors with vectors sampled from $\mathcal{N}(0, 10I)$ and observe the decoded output. We keep sampling until the generation achieves a perplexity below 150, which takes around two rounds of generation. We observed that the generated sequences are all in the format of instructions, while a higher percentage of vectors sampled with higher variance is more likely to introduce grammatical errors (e.g., “i am”) or unknown concepts (e.g., “Iafkenhoek”). This phenomenon resembles real-world user inputs, since similar grammatical errors and typos are common in ShareGPT’s instruction set. In our dataset generation process, we adopt the setting of using a BART-Large decoder and the 30% Higher Variance sampling strategy.

<table> <thead> <tr> <th>% Tokens Sampled With Higher Var.</th> <th>DiffLM Generation</th> </tr> </thead> <tbody> <tr> <td>0%</td> <td>Could you list few 10 most important things to prepare for the entrance examination? Think about the factors in order to determine your aptitude. Please write in English language.</td> </tr> <tr> <td>10%</td> <td>I am an employee at a large endo wrapping company in Hengshui, China. Give me some suggestions for a new cover letter and resume. I would love to have a good job description.</td> </tr> <tr> <td>30%</td> <td>Make me a list of things for the course I should do and be prepared. I’m doing a user design course, I am preparing for a class, but I don’t know what to do about it.</td> </tr> <tr> <td>50%</td> <td>Write a poem in the style of Iafkenhoek explaining how humans will overcome a number of factors of mental and emotional goals, which may or may not be attainable.
Create a short film about your dreams for humanity.</td> </tr> </tbody> </table> Table 4: A sample from DiffLM trained on S.A.D.’s instruction set with a BART-Large decoder. We begin with a noise matrix $Z_T \sim \mathcal{N}(0, I)$, and gradually substitute column vectors of $Z_T$ with vectors sampled from $\mathcal{N}(0, 10I)$ and observe the corresponding changes in its decoded generation.

<table> <thead> <tr> <th>Statistics</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td># of data</td> <td>52000</td> </tr> <tr> <td># of unique input tokens</td> <td>49322</td> </tr> <tr> <td># of unique output tokens</td> <td>74079</td> </tr> <tr> <td>Avg. input length (in words)</td> <td>24.4</td> </tr> <tr> <td>Avg. output length (in words)</td> <td>80.7</td> </tr> </tbody> </table> Table 5: Basic statistics of the DIFFIT dataset.

**Figure 4:** The most common root verb-noun combinations in DIFFIT’s instruction set. The inner circle illustrates the root verbs, while the outer circle illustrates the corresponding direct nouns.

### D Statistics of DIFFIT

#### D.1 Basic Statistics

We include the basic statistics of DIFFIT in Table 5.

#### D.2 Verb-Noun Analysis

Following previous practices of instruction diversity analysis [6, 7], we analyze the most common verb-noun combinations in the sampled instructions. We extract the root verb and its corresponding direct-object noun from each instruction and plot the verb-noun combinations with a frequency higher than 10 in Figure 4. We observe a large variety in the verbs used in the dataset, with the instructions covering different generation types, including story, code, table, list, etc. It is also worth noting that the verb “use” appears frequently in the instructions, usually to add specifications to the task scenario (e.g., use a specific library or use a particular tool), resembling typical real-world user inputs to LLMs.
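The partial high-variance sampling shown in Table 4 can be sketched in a few lines. This is an illustrative stand-in for the actual DiffLM sampling code: per-token noise vectors are drawn from N(0, I), and a random fraction of token positions is redrawn from N(0, 10I); interpreting the "column vectors" of $Z_T \in \mathbb{R}^{l_i \times d}$ as per-token vectors is an assumption here.

```python
import random

def sample_noise(seq_len, dim, frac_high_var, high_var_scale=10.0):
    """Sample a noise matrix Z_T of shape (seq_len, dim): each token
    position is drawn from N(0, I), except a random fraction
    frac_high_var of positions, which are drawn from
    N(0, high_var_scale * I)."""
    n_high = round(frac_high_var * seq_len)
    high_positions = set(random.sample(range(seq_len), n_high))
    std_high = high_var_scale ** 0.5  # variance 10 -> std sqrt(10)
    z = []
    for pos in range(seq_len):
        std = std_high if pos in high_positions else 1.0
        z.append([random.gauss(0.0, std) for _ in range(dim)])
    return z, high_positions

# e.g. the "30% Higher Variance" setting, for 64 token positions
z, high_positions = sample_noise(seq_len=64, dim=768, frac_high_var=0.3)
```

Resampling (as in the rejection step with the perplexity threshold of 150) would simply call `sample_noise` again until the decoded output passes the filter.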
<table> <thead> <tr> <th>Dataset</th> <th>Size</th> <th>Cost</th> </tr> </thead> <tbody> <tr> <td>Self-Instruct</td> <td>82K</td> <td>$600</td> </tr> <tr> <td>Unnatural Instructions</td> <td>68K</td> <td>$1370</td> </tr> <tr> <td>Alpaca</td> <td>52K</td> <td>$500</td> </tr> <tr> <td>GPT4-Alpaca</td> <td>52K</td> <td>$456</td> </tr> <tr> <td>DIFFIT</td> <td>52K</td> <td>$27.8</td> </tr> </tbody> </table> Table 6: Dataset construction cost of several existing open-source instruction-tuning datasets with similar sizes.

Figure 5: Human preference for two pairs of models.

### D.3 Data Generation Cost

We compare DIFFIT’s total construction cost with that of several open-source instruction-tuning datasets in Table 6. A DiffLM with a BART-Large decoder is easy to train and run on consumer-level servers or computers. Compared to self-instruct [6], our method substitutes the LLM-based instruction generation step with DiffLM sampling, reducing the dataset construction cost while enhancing the dataset’s diversity in different aspects. Thanks to the low cost of gpt-3.5-turbo’s API calls, we further reduce the cost of output generation.

### E Human Preference Evaluation

We show human preference results in Figure 5. LLaMA 7B instruction-tuned on the dataset augmented by DiffTune is favored 42.9% of the time when compared to the counterpart model tuned only on S.A.D., which is favored only 25.4% of the time. Although LLaMA 7B instruction-tuned on S.A.D. + DIFFIT is still far from comparable with ChatGPT, our generated responses are still favored 18.4% of the time, despite the large discrepancy in model size and training cost.
### F Instruction Tuning Experiment Details

Our evaluation closely follows the settings of previous general-purpose LLM work [40, 6, 22, 41, 16], covering different aspects of an LLM’s general ability as well as its instruction-following capability.

**Evaluations on LLM’s general capability.** We compare the same LLM finetuned on different instruction-tuning datasets on the following benchmarks: 1) **MMLU** [42] for factual-knowledge evaluation, which contains multiple-choice questions from 57 subjects covering different difficulty levels. 2) **GSM** [43] for mathematical reasoning, which contains grade-school-level math problems. 3) **TyDiQA** [44] for multilingual evaluation, which contains machine reading comprehension and question-answering tasks in 11 typologically diverse languages. 4) **Codex-HumanEval** [45] for coding evaluation, which requires the model to generate code given a docstring. For all benchmarks, we follow the evaluation setting of [16], except that we use Alpaca’s dialogue template instead of Tulu’s.
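For reference, Alpaca's prompt template can be reproduced as a small helper. The template strings below follow the publicly released Alpaca repository; treating this as the exact template applied in every benchmark here is an assumption:

```python
ALPACA_NO_INPUT = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)
ALPACA_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)

def format_alpaca_prompt(instruction, input_text=""):
    """Format an (instruction, optional input) pair with Alpaca's template."""
    if input_text:
        return ALPACA_WITH_INPUT.format(instruction=instruction, input=input_text)
    return ALPACA_NO_INPUT.format(instruction=instruction)
```

The model's completion is then read off after the final `### Response:` marker.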
<table> <thead> <tr> <th>Hyperparameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>#Trainable Params</td> <td>214M</td> </tr> <tr> <td>Max Seq Length</td> <td>64</td> </tr> <tr> <td>Diffusion Steps</td> <td>1000</td> </tr> <tr> <td>Noise Schedule</td> <td>Linear</td> </tr> <tr> <td>Regression Loss</td> <td>L1</td> </tr> <tr> <td>DiffLM Transformer Layers</td> <td>12</td> </tr> <tr> <td>DiffLM Transformer Dim.</td> <td>768</td> </tr> <tr> <td>Optimizer</td> <td>AdamW</td> </tr> <tr> <td>Learning Rate</td> <td>1e-4</td> </tr> <tr> <td>Batch Size</td> <td>64</td> </tr> <tr> <td>Warmup Steps</td> <td>1000</td> </tr> <tr> <td>Learning Rate Schedule</td> <td>Cosine Decay</td> </tr> <tr> <td>Weight Decay</td> <td>1e-6</td> </tr> <tr> <td>Dropout</td> <td>0.0</td> </tr> <tr> <td>Gradient Clipping</td> <td>1.0</td> </tr> <tr> <td>EMA Decay</td> <td>0.9999</td> </tr> <tr> <td>Iterations</td> <td>300K</td> </tr> </tbody> </table> Table 7: Hyperparameter settings for training and sampling DiffLM.

**Evaluations on LLM’s instruction-following capability.** We evaluate LLM’s instruction-following capability by comparing different models’ outputs on real-world user inputs. Both automatic and human evaluations are conducted to assess the helpfulness and validity of each LLM’s responses: 1) **AlpacaFarm** [41] for automatic evaluation of instruction-following capabilities, which uses GPT-4 to compare an LLM’s generation with text-davinci-003’s generation on 805 instructions. 2) **VicunaEval** [22] for human evaluation of instruction-following capabilities, which contains 80 instructions covering a wide range of scenarios.

**Human Evaluation.**
Human evaluation on VicunaEval covers two aspects: 1) Answer validity, where we ask evaluators to decide whether an LLM’s response is acceptable; and 2) Pairwise comparison, where we ask evaluators to compare two LLMs’ responses to the same instruction and score their preference on a 5-level scale indicating whether one response is much better than, slightly better than, or on par with the other. The instructions for human evaluation come from Vicuna’s evaluation set [22], which contains 80 instructions covering multiple daily scenarios. We generate responses from LLaMA 7B + S.A.D. (2x), LLaMA 7B + S.A.D. + DIFFIT, and ChatGPT for each instruction, and ask human evaluators to 1) evaluate the validity of each model’s response, and 2) select the preferred response from two different models’ generations. We conduct two pairs of preference evaluations: LLaMA 7B + S.A.D. + DIFFIT vs. LLaMA 7B + S.A.D. (2x), and LLaMA 7B + S.A.D. + DIFFIT vs. ChatGPT. The evaluation is conducted with Google Forms. We split each pair of comparisons into two forms, with 40 instructions in each form. In each form, the ordering of the questions and the responses is fully randomized. We recruited 12 experienced annotators who are fluent in English and hold bachelor’s degrees or above. Each form was assigned to three different evaluators. We show an example of our evaluation form in Figure 6.

### G Experiment Details for DiffLM Training

For all DiffLM experiments, we train the DiffLM on S.A.D.’s training set using the hyperparameter settings shown in Table 7. The generation configuration for the BART decoder is shown in Table 8. Each setting is trained on a single NVIDIA V100 32G.

Table 8: Generation configuration for DiffLM’s BART decoder.
<table> <thead> <tr> <th>Setting</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>max_length</td> <td>64</td> </tr> <tr> <td>min_length</td> <td>8</td> </tr> <tr> <td>num_beams</td> <td>10</td> </tr> <tr> <td>do_sample</td> <td>False</td> </tr> <tr> <td>no_repeat_ngram_size</td> <td>2</td> </tr> <tr> <td>repetition_penalty</td> <td>4.0</td> </tr> </tbody> </table>

We show some of the instances from the DIFFIT dataset in Table 9. Please note that the table includes examples that contain hallucinated results from ChatGPT, although the answer format is aligned with our instruction-tuning objective.

Tell me about the advantages and disadvantages of different programming languages. Add a bullet point summary explaining the pros and cons of each language. Give me a brief explanation of the difference between Swift and Objective-C. Show me how to use C# instead of C++. Tell me why. Different programming languages have unique advantages and disadvantages. For example, C++ is fast and flexible, but difficult to learn. Java is platform-independent, but slower than some languages. Python has simple syntax, but is slower compared to other languages. Swift is faster than Objective-C, but still somewhat new and has limited library support. C# is a cleaner version of C++ with better memory management, but it is not as fast as C++. To use C# instead of C++, you would need to download and install Microsoft’s Visual Studio, a software development environment. C# is often used for web, Windows, and game development.

Title: “Galactic Rift” In a distant future, humanity has mastered space travel and colonization across multiple galaxies. But as the population soars, a powerful cult arises, vowing to cleanse the universe of all non-believers and establish a new order. Our protagonist, a small group of renegade fighters led by a young woman named Nova, must travel to the cult’s stronghold and stop their plans before it’s too late.
Along the way, they will face alien creatures, rogue robots, and the cult’s elite army. As they progress, they discover that the cult is not what they seem - it’s led by ancient beings who seek to resurrect their god and conquer all life in the universe. Nova and her group must decide whether to continue their mission or join forces with the unlikely allies they meet along the way. Through action-packed battles, moral quandaries, and unexpected twists, “Galactic Rift” explores themes of power, faith, and the true meaning of humanity. This edgy sci-fi/fantasy series is perfect for middle schoolers who want an adventure that will leave them on the edge of their seats. As an AI concept teacher, I explain that AI refers to the simulation of human intelligence in machines to perform tasks that typically require human cognitive abilities. AI can help businesses streamline processes, make data-driven decisions, and improve customer experiences. AI-powered tools can be integrated into a workplace to enhance productivity, automate repetitive tasks, and improve the accuracy of decision-making. As an AI concept user, I have successfully leveraged AI to automate data analysis and improve customer segmentation for a retail client, resulting in a 20% increase in sales. To prepare a job resume for a Chinese independent IT consultant and an IT contractor who works for the USA as a CPA consultant in China, highlight relevant certifications, degrees, and experience in the technology and finance industry. Emphasize strong communication skills, understanding of Western business practices, and proficiency in Chinese and English. Showcase prior project success, specific technical skills, and collaboration abilities. As an IT consultant or contractor, showcase relevant project management experience and expertise in the latest technologies that support business solutions. Additionally, mention how you have developed and scaled businesses in both the US and China markets. 
Be brief and clear about your most significant accomplishments, abilities, and contributions to companies you have worked with. Sure, to learn Kuolang, you need to start by downloading an IDE that supports the language, such as VS Code. Then, you can learn the basics of the language, such as its syntax and data types. There are many resources available, including online courses, documentation, and tutorials. Once you have a basic understanding of the language, practice coding exercises to reinforce your skills. Finally, when you feel confident, try building a project from scratch to solidify your knowledge. Good luck! Question 2: What if the Beatles had never formed as a band? Answer A: If the Beatles had never formed as a band, the music scene in the 1960s and beyond would have been drastically different. It’s likely that other bands and artists would have risen to take their place, but the impact and influence of the Beatles on popular music cannot be overstated. Their innovative sound, thoughtful lyrics, and charismatic personalities helped to shape the direction of music at the time. The Beatles also served as a cultural icon, influencing fashion, art, and societal attitudes. Without the Beatles, the music industry and youth culture may have taken a different direction. Answer B: The world would have been different without the Beatles. They were one of the most influential bands in music history and their impact can still be felt today. Their iconic hits like “Yesterday” and “Hey Jude” have become timeless classics, and their influence on pop culture extends far beyond music. Without the Beatles, there may not have been a British Invasion and other similar musical movements that followed. The world may have missed out on some of the most memorable songs and performances of all time. It’s hard to imagine what our world would look like without the Beatles, but we can certainly appreciate them for the incredible legacy they left behind. 
Is Answer A a valid response? A valid response should: (1) Answer the question; (2) Have no significant errors; (3) Have no meaningless text (e.g., repetition). ☐ Yes ☐ No Is Answer B a valid response? A valid response should: (1) Answer the question; (2) Have no significant errors; (3) Have no meaningless text (e.g., repetition). ☐ Yes ☐ No Out of the two answers, which one do you prefer as a better answer to the question? 1 2 3 4 5 ☐ A is clearly better ☐ ☐ ☐ ☐ ☐ B is clearly better Figure 6: The form used for human evaluation.
**GUI Testing Using Computer Vision**

The MIT Faculty has made this article openly available.

<table> <thead> <tr> <th></th> <th></th> </tr> </thead> <tbody> <tr> <td>As Published</td> <td><a href="http://dx.doi.org/10.1145/1753326.1753555">http://dx.doi.org/10.1145/1753326.1753555</a></td> </tr> <tr> <td>Publisher</td> <td>Association for Computing Machinery (ACM)</td> </tr> <tr> <td>Version</td> <td>Author's final manuscript</td> </tr> <tr> <td>Citable Link</td> <td><a href="http://hdl.handle.net/1721.1/72684">http://hdl.handle.net/1721.1/72684</a></td> </tr> <tr> <td>Terms of Use</td> <td>Creative Commons Attribution-Noncommercial-Share Alike 3.0</td> </tr> <tr> <td>Detailed Terms</td> <td><a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">http://creativecommons.org/licenses/by-nc-sa/3.0/</a></td> </tr> </tbody> </table>

GUI Testing Using Computer Vision

Tsung-Hsiang Chang, MIT CSAIL, vgod@mit.edu
Tom Yeh, UMIACS & HCIL, University of Maryland, tomyeh@umiacs.umd.edu
Robert C. Miller, MIT CSAIL, rcm@mit.edu

ABSTRACT

Testing a GUI's visual behavior typically requires human testers to interact with the GUI and to observe whether the expected results of interaction are presented. This paper presents a new approach to GUI testing that uses computer vision to let testers automate their tasks. Testers can write a visual test script that uses images to specify which GUI components to interact with and what visual feedback to observe. Testers can also generate visual test scripts by demonstration. By recording both input events and screen images, it is possible to extract the images of components interacted with and the visual feedback seen by the demonstrator, and generate a visual test script automatically. We show that a variety of GUI behavior can be tested using this approach.
Also, we show how this approach can facilitate good testing practices such as unit testing, regression testing, and test-driven development.

Author Keywords: GUI testing, GUI automation, test by demonstration

ACM Classification Keywords: H.5.2 Information Interfaces and Presentation: User Interfaces—Graphical user interfaces (GUI); D.2.5 Software Engineering: Testing and Debugging—Testing tools

General Terms: Algorithms, Design, Reliability

INTRODUCTION

Quality Assurance (QA) testers are critical to the development of a GUI application. Working closely with both programmers and designers, QA testers make efforts to ensure the GUI application is correctly implemented by the former following the design specification drawn by the latter. Without such efforts, there is no guarantee the usability promised by a good design is fully realized in the implementation. However, GUI testing is a labor intensive task. Consider the following GUI behavior defined in a design specification of a video player: click the [play] button and it becomes the [pause] button. To test if this behavior is correctly implemented, a tester must look for the “play” button on the screen, click on it, and see if it is replaced by the “pause” button. Every time this behavior needs to be tested again, the tester must manually repeat the same task all over again. While GUI testers often toil in their tedious tasks, testers of non-GUI applications have been enjoying the convenience of tools to automate their tasks. For example, to test if a function call behaves correctly, a tester can write a script that makes this function call, followed by an assertion function call to check if the result is equal to 4 and report an error if not. This script can be run automatically as many times as desired, which greatly reduces the tester’s effort. In this paper, we present Sikuli Test, a new approach to GUI testing that uses computer vision to help GUI testers automate their tasks.
Sikuli Test enables GUI testers to write visual scripts using images to define which GUI widgets to test and what visual feedback to observe. For example, to automate the task of testing the behavior of the video player described above, a tester can write a short visual script that clicks the image of the “play” button and then asserts that the image of the “pause” button appears. When this script is executed, it will act like a robotic tester with eyes that looks for the “play” button on the screen, clicks on it, and checks whether it is replaced by the “pause” button, as if a human tester were operating and observing the GUI himself or herself (Figure 1). We make the following contributions in this paper:

**Interview study with GUI testers** We examine the limitations of current testing tools and suggest design requirements for a new testing framework.

**Automation of visual assertion** Based on the visual automation API provided by Sikuli Script [18], a set of visual assertion functions is added to determine whether expected outputs are shown. This extension completes the automation of GUI testing by using images to verify outputs in addition to directing inputs.

**Test-By-Demonstration** Testers can interact with a GUI and record the actions they perform and the visual feedback they see. Test scripts can be automatically generated to reproduce the actions and verify the visual feedback for testing purposes.

**Support of good testing practices** Features are introduced to support good testing practices including unit testing, regression testing, and test-driven development.

**Comprehensive evaluation** We analyze the testability of a wide range of visual behavior based on five actual GUI applications. Also, we examine the reusability of test scripts based on two actual GUI applications evolving over many versions.

---

**INTERVIEW STUDY**

To guide the design and development of our new GUI testing tool, we conducted informal interviews with four GUI testing professionals from academia and industry.
Questions asked during the interviews were centered on three topics: current testing practices, use of existing tools, and experience with existing tools.

In terms of testing practices, we found that most of our subjects are involved in the early design process to coordinate and formulate workable test plans to ensure quality and testability. Testing is performed frequently (often daily) on the core components. For example, underlying APIs are tested with simulated inputs and checked for expected outputs. But testing the outward behavior of GUIs is less frequent, usually performed at major milestones by many human testers. Some of them regularly apply good testing practices such as unit testing, regression testing, and test-driven development; but the scope of these practices is limited to the parts without a GUI.

In terms of the use of testing tools, some have developed customized automation tools. They write scripts that refer to GUI objects by pre-programmed names or by locations to simulate user interactions with these objects. Some have been using existing tools such as AutoIt [1], a BASIC-like scripting language designed to automate user interactions for Windows GUI applications.

In terms of experience with these tools, our subjects expressed frustration and described their experience as sometimes “painful”, “slow”, and “too much manual work.” Several problems with current automatic testing tools were identified by the subjects, which might explain this frustration. First, whenever the GUI design is modified and the positions of GUI components are rearranged, automatic tools based on the absolute position of components often fail and would actually “slow down the testing process” because of the need to modify the test scripts. Second, while automatic tools based on component naming may avoid this problem, many components simply cannot be, or have not been, named.
Based on the findings of this interview, we identified the following five design goals to guide the design and development of our new GUI testing tool:

- (G1) The tool should allow testers to write scripts to automate tests.
- (G2) The tool should not require testers to refer to GUI components by names or by locations.
- (G3) The tool should minimize the instances when test scripts need to be modified due to design changes.
- (G4) The tool should minimize the effort of writing test scripts.
- (G5) The tool should support good testing practices such as unit testing, regression testing, and test-driven development.

**TESTING BY VISUAL AUTOMATION**

We present Sikuli Test, a testing framework based on computer vision that enables developers and QA testers to automate GUI testing tasks. Consider the following task description for testing a particular GUI feature: Click on the color palette button. Check if the color picking dialog appears. To carry out this test case, QA testers need to manually interact with the GUI and visually check if the outcome is correct. Using Sikuli Test, the testers can automate this process by converting the task description into an automation script. This script consists of action statements to simulate the interactions and assertion statements to visually verify the outcomes of these interactions. For example, the above task description can be easily translated into a test script as:

```
click( [color palette button] );
assertExist( [color picking dialog] );
```

(the bracketed names stand in for the screenshot images used in the actual script). By taking this image-based scripting approach, Sikuli Test meets the first three design goals: it allows testers to write visual scripts to automate tests (G1), to refer to GUI objects by their visual representation directly (G2), and to provide robustness to changes in spatial arrangements of GUI components (G3). The details of how to write test scripts using action statements and assertion statements are given next.
**Simulating Interactions using Action Statements**

To simulate interactions involved in a test case, QA testers can write action statements using the API defined in Sikuli Script [18]. (Figure: the Sikuli Test interface consists of a test script editor and an information panel summarizing the test result.) Sikuli Script is a visual automation system that provides a library of functions to automate user inputs such as mouse clicks and keystrokes. These functions take screenshots of GUI components as arguments. Given the image of a component, Sikuli Script searches the whole screen for the component to deliver the actions. For example,

- **click** on the close button,
- **dragDrop** a word document to the trash,
- **type** “CHI” in a search box.

Since Sikuli Script is based on a full scripting language, Python, it is possible for QA testers to programmatically simulate a large variety of user interactions, simple or complex.

**Verifying Outcomes using Visual Assertion Statements**

Sikuli Test introduces two visual assertion functions. QA testers can include these functions in a test script to verify whether certain GUI interactions generate the desired visual feedback. The two assertion functions are:

- **assertExist**(image or string [, region]) asserts that an image or string appears on screen or in a specific screen region
- **assertNotExist**(image or string [, region]) asserts that an image or string does not appear on screen or in a specific screen region

The image is specified as a URL or a path to an image file. It can also be captured by a screenshot tool provided in our Integrated Development Environment (IDE). When a string is specified, OCR (Optical Character Recognition) is performed to check if the specified string can be found in the screen region. The optional parameter region is specified as a rectangular area on the screen (i.e., x, y, width, height). If not specified, the entire screen is checked.
Alternatively, the region can be specified as a second image, in which case the entire screen is searched for that image and the matching region is searched for the first image. Spatial operators such as inside, outside, right, bottom, left, and top can be further applied to a region object to derive other regions in a relative manner.

**Examples**

We present examples to illustrate how test scripts can be written to verify visual feedback.

1. Appearance

```python
type(":p")
assertExist([graphical ":p" emoticon])
```

In some instant messengers, textual emoticons, e.g. smiley face :), are replaced by graphical representations automatically. This example shows how to test the appearance of the corresponding graphical face once the textual emoticon is entered in Windows Live Messenger.

2. Disappearance

```python
blueArea = find([blue text box])
closeButton = [close button image]
click(closeButton)
assertNotExist(closeButton, blueArea)
assertNotExist("5", blueArea)
```

In this example, the close button is expected to clear the content of the text box as well as itself. Suppose the GUI is already in a state that contains a "5". First, we find the blue text box on the screen and store the matched region that has the highest similarity in `blueArea`. Then, after clicking the close button, two `assertNotExist` calls are used to verify the disappearance in the blue area.

TESTING BY DEMONSTRATION

Sikuli Test provides a record-playback utility that enables QA testers to automate GUI testing by demonstration. The operation of a GUI can be described as a cycle consisting of actions and feedback. Given a test case, the testers follow the given actions to operate a GUI, and verify if the visual feedback is the same as expected. If so, they proceed to the next actions and verify further feedback. With the record-playback mechanism, the testers can demonstrate the interactions involved in the test case. The actions as well as the screen are recorded and translated into a sequence of action and assertion statements automatically.
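A minimal sketch of this translation step is shown below. The event format and the placeholder image arguments are illustrative assumptions, not Sikuli Test's actual internal representation; it only shows the idea of merging consecutive keystrokes into `type()` calls and turning each click into a `click()` whose image argument is later cropped from the screenshot preceding the event:

```python
def generate_action_statements(events):
    """events: dicts like {"t": ..., "kind": "mouse_click", "x": ..., "y": ...}
    or {"t": ..., "kind": "key", "char": ...}. Consecutive keystrokes are
    merged into one type() call; each click becomes a click() statement
    whose image argument is a placeholder filled in by a later cropping step."""
    statements, buffered = [], []

    def flush():
        # Emit any buffered keystrokes as a single type() statement.
        if buffered:
            statements.append('type("{}")'.format("".join(buffered)))
            buffered.clear()

    for ev in events:
        if ev["kind"] == "key":
            buffered.append(ev["char"])
        elif ev["kind"] == "mouse_click":
            flush()
            statements.append(
                "click(<80x50 crop around ({x}, {y}) at t={t}>)".format(**ev))
    flush()
    return statements
```

A real implementation would additionally pair each generated action with assertion statements derived from the screen differences observed after the action.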
The action statements, when executed, can replicate the actions as if the testers were operating the GUI themselves. Likewise, the assertion statements can verify whether the automated interactions lead to the desired visual feedback, as if the testers were looking at the screen themselves. The test-by-demonstration capability of Sikuli Test satisfies the design goal of minimizing the effort needed to write test scripts (G4). Details of how a demonstration is recorded and how actions and assertions are automatically generated from it are given next.

**Recording Demonstration**

As QA testers demonstrate a test case, a recorder runs in the background to capture the actions they perform and the visual feedback they see. To capture actions, the recorder hooks into the global event queue of the operating system to listen for input events related to the mouse and the keyboard. The mouse events recorded include mouse_down, mouse_up, mouse_move, and mouse_drag. Each mouse event is stored with the cursor location (x, y) and the state of the buttons. The keyboard events recorded include key_down and key_up, stored together with key codes. All events include a timestamp that is used to synchronize with the screen recording. To capture screens, the recorder periodically grabs a screenshot of the entire screen from the video buffer in the operating system. In our prototype, recording can be done at 5 fps at a resolution of 1280x800 on a machine with a 2 GHz CPU and 2 GB of memory.

**Generating Action Statements**

Given a series of screen images and input events captured by the recorder, action statements can be generated to replay the interactions demonstrated by the testers. For example, a single mouse click recorded at time t at location (x, y) can be directly mapped to click(I) where I is the image of the GUI component that was clicked.
The image I can be obtained by cropping a region around (x, y) from the screen image captured at time t−1, right before the click event. In our current implementation, a constant-size (80x50) region around the input location is cropped to represent the target GUI component receiving the input. Even though the region may not fit the target component perfectly, it often contains enough pixels to uniquely identify the component on the screen. If ambiguity arises, the user can at any time adjust the cropping area to include more pixels of the component or its context to resolve the ambiguity. Since Sikuli Test is independent of any GUI platform, it can also be used to test mobile applications running on an emulator. For example, scrolling and movement on an Android emulator can be tested by comparing the position of a target before and after an action that should move it. After clicking on the left button, we expect the series of images to scroll rightward, so the new x coordinate should be larger than the old one. We choose the image of a sunset as the target. Its x coordinate, derived from the most similar match of find(), is stored in old_x. After clicking the left button, its new x coordinate, derived from find() again, is compared with old_x to verify the correctness of the implementation. To execute an action statement, the automation engine visually identifies the target GUI component’s current location \((x', y')\) by searching the current screen for an image region matching the target image \(I\). To find a given pattern, we apply the template matching technique with the normalized correlation coefficient implemented in OpenCV in our current system [18]. This technique treats the pattern as a template and compares the template to each same-size region in an input image to find the region most similar to the template.
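As a rough illustration of this matching step, the following is a toy pure-Python sketch. All names here are hypothetical: the real system operates on screenshots and relies on OpenCV's far more efficient normalized-correlation matcher, whereas this sketch uses tiny grayscale images represented as nested lists.

```python
from math import sqrt

def ncc(patch, template):
    """Normalized correlation coefficient between two equal-size patches."""
    a = [p for row in patch for p in row]
    b = [t for row in template for t in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    return num / den if den else 0.0  # flat patches score 0

def find_best_match(screen, template):
    """Slide the template over the screen; return (x, y) of the best match."""
    th, tw = len(template), len(template[0])
    best, best_xy = -2.0, None
    for y in range(len(screen) - th + 1):
        for x in range(len(screen[0]) - tw + 1):
            patch = [row[x:x + tw] for row in screen[y:y + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy
```

The returned coordinates are the top-left corner of the matched region; as described next, the engine then delivers the click to the region's center.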
Then, the click event is delivered to the center of the matched region to simulate the desired user interaction. Some input events may need to be grouped into a single action statement. For example, two consecutive mouse clicks within a short span of time are mapped to \(\text{doubleClick}()\). Keyboard typing events can be clustered to form a string and mapped to \(\text{type}(\text{string})\). A \(\text{mouse\_down}\) event at one location followed by a \(\text{mouse\_up}\) event at another location can be mapped to \(\text{dragDrop}(I,J)\), where \(I\) and \(J\) denote the images extracted from the locations of the two mouse events respectively.

**Generating Assertion Statements**

Assertion statements can also be automatically derived from the screen images captured during the demonstration. We developed and implemented a simple vision algorithm to accomplish this. We assume any salient change between two consecutive images is very likely to be the visual feedback caused by an input event. Our algorithm compares the screen images \(I_t\) and \(I_{t+1}\), where \(t\) is the time of a recorded input event, and identifies the pixels that are visually different. It then clusters the changed pixels in close proximity and merges them into groups. Each group of pixels is likely to correspond to a single GUI component. Finally, it computes a bounding rectangle around each group and obtains a cropped image containing the visual feedback of each GUI component visually affected by the input event. An assertion statement that can later be used to check the presence of this visual feedback is then generated. Figure 3 shows an example of deriving the visual feedback when a drop-down box is opened by clicking. Often, more than one GUI component exhibits visual feedback as the result of a single input event. In this case, our algorithm produces a compound assertion statement including multiple cropped image regions.
For example, Figure 4 shows a dialog box with a checkbox that can be used to enable several GUI components at once. Checking this checkbox causes all previously greyed-out components in a panel to regain their vivid colors. An optional step for the tester to increase the reliability of the automatic visual feedback detector is to provide hints about where it should look for the visual feedback. After performing an interaction, and before moving on to the next, the tester can move the mouse cursor to the area where the visual feedback has occurred and press a special key, say F5, to trigger a hint. The detector can use the location of the cursor to extract the relevant visual feedback more reliably and generate an appropriate assertion statement. While we can identify many cases in which visual assertion statements can be created automatically in this manner, a few challenges remain. First, periodic changes in the desktop background, such as those related to the system clock or the wireless signal indicator, may be inadvertently detected even though they are irrelevant to the GUI being tested. One solution would be to ask the testers to specify the boundary of the GUI beforehand so that background noise can be filtered out. Second, certain actions might take longer to produce any visual feedback; the screen image captured immediately after the action might not contain it. One solution would be to wait until a significant change is detected. Third, some visual feedback may involve animation spanning several frames, for example, a large window appearing in a blind-rolling-down fashion. One solution would be to wait until the screen has stabilized and focus only on the final visual feedback. However, while it is possible to test the final feedback, testing the intermediate steps of an animation can still be unreliable, because it is difficult to synchronize between the frames sampled during demonstration and those sampled during the test.
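The detection pipeline described above, diffing two screenshots, clustering the changed pixels by proximity, and computing a bounding rectangle per cluster, can be sketched roughly as follows. The per-pixel representation and the 4-connectivity clustering rule are simplifying assumptions of this sketch, not necessarily the exact rules the prototype uses.

```python
from collections import deque

def changed_pixels(before, after):
    """Return the set of (x, y) coordinates whose pixel value changed."""
    return {(x, y)
            for y, (r1, r2) in enumerate(zip(before, after))
            for x, (p1, p2) in enumerate(zip(r1, r2))
            if p1 != p2}

def bounding_boxes(pixels):
    """Cluster 4-connected changed pixels; return one (x, y, w, h) per cluster."""
    pixels = set(pixels)
    boxes = []
    while pixels:
        seed = pixels.pop()
        group, queue = {seed}, deque([seed])
        while queue:  # breadth-first flood fill over neighboring changed pixels
            x, y = queue.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in pixels:
                    pixels.remove(nb)
                    group.add(nb)
                    queue.append(nb)
        xs = [x for x, _ in group]
        ys = [y for _, y in group]
        boxes.append((min(xs), min(ys),
                      max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return sorted(boxes)
```

Each resulting box would then be cropped from the after-image to produce the image argument of a generated `assertExist`; multiple boxes yield the compound assertion described above.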
SUPPORTING GOOD TESTING PRACTICES

Sikuli Test comes with a set of features to help GUI developers and QA testers engage in good testing practices such as unit testing, regression testing, and test-driven development, satisfying the last design goal (G5).

**Unit Testing**

When a GUI is complex, making sure it is tested thoroughly requires a systematic approach. One such approach is to break the GUI down into manageable units, each of which targets a particular part, feature, or scenario. This approach is known as unit testing. To support unit testing for GUIs, Sikuli Test draws many design inspirations from JUnit, a popular unit testing framework for Java programming:

1. Testers can define each test as a function written in Python. Every test function is meant to be run independently, without relying on the side effects of another test function. For example, after testing the exit button, which has the side effect of closing the application, no more tests can be run unless the GUI is restarted. Therefore, to run every test independently, Sikuli Test provides two functions `setUp()` and `tearDown()` that can be overridden by testers to set up and to clean up the testing environment. A typical way to achieve this independence is to always start the GUI in a fresh configuration before running a test.

2. Testers can define common action functions to automatically advance the GUI to a particular state in order to run certain tests only relevant in that state. Common action functions can be shared among all test cases in the same script to reduce redundant code and to prevent future inconsistency. For example, suppose the `Save` dialog box is relevant to several test cases; the tester can write a common action function to open the `Save` dialog that contains a `click()` on the `File` menu followed by another `click()` on the `Save` item.
On the other hand, testers can also define shared assertion functions to verify the same visual feedback derived from different actions. For example, the appearance of a `Save` dialog box can be triggered by the hotkey Ctrl-S, by an icon on the toolbar, or by the Save item in the `File` menu; all three could be verified by the same `assertSaveDialog()`.

3. Testers can run a test script and monitor the progress as each test function in the script is run. They can see a summary showing whether each test has succeeded or failed, as well as the total number of successes and failures.

4. When errors are found, testers can communicate them to programmers effectively. On the one hand, testers are encouraged to assign each test function a meaningful name, such as `test_click_play_button`. On the other hand, the images embedded in each function make it visually clear which GUI components and what visual feedback are involved in the errors.

**Regression Testing**

When a new feature is implemented, in addition to verifying that the implementation is correct, it is equally important to ensure that it does not break any existing feature that used to work. This practice is known as regression testing in software engineering. Many software projects use daily builds to automatically check out and compile the latest development version from the version control system. The daily build is tested by automated unit testing suites to validate the basic functionality. However, because of the weaknesses of automatic testing tools for GUIs, the current regression testing process works only on internal components, not on the GUI. Regression testing therefore becomes a tedious practice that requires QA testers to manually repeat the same set of tests whenever the GUI is modified. Sikuli Test is a labor-saving and time-saving tool that enables QA testers to automate regression testing.
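A test script following these practices might be organized as in the following sketch. The Sikuli-style calls are stubbed out here over a fake screen model (a set of visible element names) so that the structure is self-contained and runnable anywhere; the image names, the stub behavior, and the test names are all hypothetical, not part of Sikuli Test itself.

```python
import unittest

# Fake "screen": the set of element images currently visible.
SCREEN = set()

def click(target):
    """Stub of Sikuli's click(): mutate the fake screen accordingly."""
    if target == "file_menu.png":
        SCREEN.add("save_item.png")
    elif target == "save_item.png":
        SCREEN.add("save_dialog.png")

def assertExist(target):
    """Stub of Sikuli Test's visual assertion."""
    assert target in SCREEN, f"{target} not found on screen"

class EditorTests(unittest.TestCase):
    def setUp(self):
        # Start every test from a fresh GUI state (point 1 above).
        SCREEN.clear()
        SCREEN.add("file_menu.png")

    def open_save_dialog(self):
        # Common action function shared by several tests (point 2 above).
        click("file_menu.png")
        click("save_item.png")

    def assertSaveDialog(self):
        # Shared assertion, usable after hotkey, toolbar, or menu actions.
        assertExist("save_dialog.png")

    def test_save_dialog_via_menu(self):
        self.open_save_dialog()
        self.assertSaveDialog()
```

Once written against the real API, such a script can be rerun unchanged against every daily build, which is exactly the regression-testing use case.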
Using Sikuli Test, the testers only need to program test cases once; those test cases can then be applied repeatedly to check the integrity of the GUI. To show the feasibility of Sikuli Test for supporting regression testing, an evaluation is given later.

**Test-Driven Development**

While our testing framework was originally designed for QA testers, it can be used by both GUI designers and programmers during the development process. In large GUI projects, where the separation between design and implementation is clearer, designers can create test cases based on design illustrations or high-fidelity prototypes. For example, a designer can use a graphics editor such as Photoshop to create a picture illustrating the GUI’s desired visual appearance. Based on this picture, the designer can crop representative images of operable GUI components such as buttons to compose action statements. The designer can also graphically illustrate the expected visual feedback when these GUI components are operated. Again, this graphical illustration can be used directly in assertion statements. Test cases can thus be created and handed to programmers, who implement the GUI’s outward visual behavior. These test cases will initially fail because none of the desired visual behavior has been implemented yet. As more features are implemented, more test cases pass. When all the test cases pass, the implementation is not only complete but also thoroughly tested. This practice is known as test-driven development, which has been widely adopted by non-GUI development projects. Our visual testing framework creates an opportunity for GUI designers and programmers to engage in this good practice of software engineering. Even in small projects, where a programmer often doubles as a designer and a tester, test-driven development can still be practiced.
For example, given a design specification, a programmer can create the skin of a GUI without any functionality using a Rapid Application Development (RAD) tool. Then, before the actual implementation, the programmer can take screenshots of the skin to write test cases and start writing GUI code to pass those test cases.

EVALUATION

We evaluated Sikuli Test in two ways: a testability analysis of the diverse set of visual behavior GUI testers can test automatically, and a reusability analysis of how likely testers are to be able to reuse a test script as a GUI evolves.

**Testability Analysis**

We performed testability analysis on a diverse set of visual behavior. Each visual behavior can be defined as a pairing of a GUI widget and a visual effect rendered on it. We considered 27 common widgets (e.g., button, check box, slider, etc.) and 25 visual effects (e.g., appearance, highlight, focus, etc.). Out of the 675 possible pairings, we identified 368 to be valid, excluding those that are improbable (e.g., scrollbar + font change). We began the analysis by applying Sikuli Test to the visual behavior exhibited by four real GUI applications (1: Capivara, 2: jEdit, 3: DrJava, and 4: System Preferences on Mac OS X). Table 1 summarizes the result of the testability analysis; each cell corresponds to a visual behavior. Out of the 368 valid visual behaviors, 139 (indicated by the number of the application in which they were tested) are empirically testable: the visual behavior was found in the four applications and could be tested. Another 181 (indicated by a triangle △) are theoretically testable: the visual behavior was not found in the four applications, but its testability could be inferred from that of similar visual behavior. The remaining 48 (indicated by an “F”) are not testable. In addition to the valid visual behaviors, there are 307 improbable pairings, indicated by an “X”. As can be seen, the majority of the valid visual behavior considered in this analysis can be tested by Sikuli Test.
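As a quick consistency check, the counts reported in this analysis add up (all numbers taken from the text):

```python
# Counts reported in the testability analysis.
widgets, effects = 27, 25
valid, improbable = 368, 307
empirical, theoretical, untestable = 139, 181, 48

assert widgets * effects == valid + improbable == 675  # all possible pairings
assert empirical + theoretical + untestable == valid   # breakdown of valid ones
```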
However, complex visual behavior such as that involving animation (e.g., fading) is currently not testable, which is a topic for future work.

**Reusability Analysis**

We performed a reusability analysis of test scripts based on two real GUI applications: Capivara, a file synchronization tool, and jEdit, a text editor. These two applications were selected from SourceForge.net with two criteria: each must have a GUI, and each must have at least 5 major releases available for download. First, we focused on the two earliest downloadable versions of each application. For Capivara, we chose versions 0.5.1 (Apr. '05) and 0.6 (June '05) (Figure 5 A,B). For jEdit, we chose versions 2.3 (Mar. '00) and 2.4.1 (Apr. '00) (Figure 5 A,B). Since there were modifications to the user interface between these two versions, we were interested in whether test cases written for the first version could be applied to the second version to test the unmodified parts of the application. We created 10 and 13 test cases for Capivara and jEdit respectively. Most of the test cases were created using the test-by-demonstration tool, while some required manual adjustments such as giving hints and removing excess context from the detected visual feedback. Table 2 summarizes our findings. The two tables include the first two versions, plus a later version that showed a drastic change in the GUI, for Capivara and jEdit respectively. The column of the first version shows how each test case was made: A denotes automatically generated, AM denotes automatically generated with some modifications, and M denotes manually written. Each column of the other two versions shows the result of each test case for that version: P denotes passed, whereas F1 to F5 denote failures. (The cause of each failure is explained later.) Between the first two versions of Capivara, we observed one modification: the size limitation of the panel splitter was different.
Thus, we only needed to update 1 of the 10 original test cases to reflect this modification. In other words, we were able to apply the other 9 test cases against the second version to test the correctness of the unmodified features. Similarly, in the case of jEdit, we observed 3 modifications among the features covered by the original 13 test cases, and again we were able to apply the remaining 10 test cases against the second version. Next, we examined the long-term reusability of test cases as the applications underwent multiple design changes. For Capivara, we considered two additional major versions: 0.7.0 (Aug. '05) and 0.8.0 (Sep. '06), whereas for jEdit, we considered five more: 2.5.1 (Jul. '00), 2.6final (Nov. '00), 3.0.1 (Jan. '01), 3.1 (Apr. '01), and 3.2.1 (Sep. '01). We tested whether each of the original test cases was still reusable for the later versions and, for those no longer reusable, identified the causes. Figure 6 summarizes our findings. To show the reusability of the test cases, we arranged the versions across the horizontal axis. For each version, the height of the baseline region (blue) indicates the number of original test cases still reusable for that version. This region exhibits a downward slope toward the newer versions, reflecting the fact that fewer and fewer of the original test cases remained applicable. The sharpest drop-off can be observed at version 0.8.0 for Capivara (Figure 5.c) and at 2.6final for jEdit (Figure 5.C), which can be attributed to major design changes in these versions. The lesson to be drawn from this observation is that as long as the design of a GUI evolves incrementally, as is often the case, a significant number of test cases remain reusable, which is important for supporting regression testing.
Also, we identified five major causes for a test case to become unusable: (F1) change in visual style, e.g., skin, size, font, etc.; (F2) removal of the action component; (F3) removal of the expected visual feedback; (F4) change in the surroundings of the target components; and (F5) change in internal behavior. Each cause of test failure is represented in the figure as one of the colored regions above the baseline region, with its height indicating the number of unusable test cases attributed to it. As can be expected, the most dominant cause is change in visual style (F1, orange), since our testing framework is largely driven by high-level visual cues. One surprising observation is an unusual reversal in F2 at jEdit 2.5.1, indicating that test cases that were not reusable in the previous version became reusable again. Upon close examination, we found that toolbar icons were removed at 2.4.1 but reintroduced at 2.5.1, making the test cases targeting toolbar icons reusable again. While such a reversal of GUI design is rare in practice, when it does happen, Sikuli Test is able to capture it.

RELATED WORK

Testing is important for ensuring and improving GUI usability. Many research efforts have been made in the HCI community to study and develop tools for this critical task. Boshernitsan et al. [2] studied software developers’ need to modify their programs to keep up with changing requirements and designs, and developed a visual tool to simplify code transformation. Subrahmaniyan et al. [15] examined the testing and debugging strategies of end-user programmers and found testing and code inspection to be the two most common strategies. To support these strategies, Ko and Myers developed Whyline for the Alice platform [5] and further extended it to general Java GUI programming [6]. Using Whyline, a GUI programmer can make a visual recording of an interactive session with a GUI and watch this recording to catch incorrectly programmed visual feedback (testing).
Whyline is able to intelligently suggest questions seeking to explain problematic visual feedback (e.g., why did the color of this button change to blue?). Such questions can help the programmer quickly look up and fix the lines of code responsible for the problem (code inspection). Sikuli Test can complement Whyline in that it can watch the recording on behalf of the programmer to catch errors and suggest appropriate questions. A GUI needs to be interacted with in order to be tested. Several research works have focused on how to automate such interaction by demonstration, a paradigm known as Programming By Demonstration (PBD). As early as the early '90s, Singh et al. [12] proposed the Sage system, which can capture and store GUI interactions demonstrated by users as reusable templates. Wilcox et al. [16] illustrated the value of visual feedback in programming-by-demonstration tools, especially during the testing process, a finding that validates the design decision of Sikuli Test to embed visual feedback directly in test scripts. Given the popularity of Web-based applications, the Koala system by Little et al. [8] and the CoScripter system by Leshed et al.
[7] both aim to enable Web users to capture, share, and automate their interactions with Web applications.

Test Cases of Capivara

<table>
<thead>
<tr>
<th>Test Case</th>
<th>(1st)</th>
<th>(2nd)</th>
<th>(4th)</th>
</tr>
</thead>
<tbody>
<tr>
<td>connection-setting-cancel</td>
<td>A</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>connection-setting-ok</td>
<td>A</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>new-host-in-favorites</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>text-changed-in-status-and-tab</td>
<td>A</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>menu-exit-dialog</td>
<td>AM</td>
<td>P</td>
<td>F2</td>
</tr>
<tr>
<td>toolbar-sync-dialog</td>
<td>A</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>name-size-column-in-listbox</td>
<td>A</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>menu-options-tree</td>
<td>AM</td>
<td>P</td>
<td>F4</td>
</tr>
<tr>
<td>enabled-disabled-buttons</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>splitter-resize</td>
<td>M</td>
<td>F3</td>
<td>F3</td>
</tr>
</tbody>
</table>

Test Cases of jEdit

<table>
<thead>
<tr>
<th>Test Case</th>
<th>(1st)</th>
<th>(2nd)</th>
<th>(4th)</th>
</tr>
</thead>
<tbody>
<tr>
<td>textarea-add-del-by-key</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>textarea-add-del-by-menu</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>new-tab-by-key</td>
<td>A</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>new-tab-by-menu</td>
<td>AM</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>new-tab-by-toolbar</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>find-by-key</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>find-by-menu</td>
<td>AM</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>textfield-on-toolbar</td>
<td>AM</td>
<td>P</td>
<td>F2</td>
</tr>
<tr>
<td>toolbar-print-dialog</td>
<td>AM</td>
<td>F5</td>
<td>F3</td>
</tr>
<tr>
<td>menu-submenu</td>
<td>AM</td>
<td>P</td>
<td>P</td>
</tr>
<tr>
<td>scroll-textarea</td>
<td>M</td>
<td>P</td>
<td>F1</td>
</tr>
<tr>
<td>quit-cancel</td>
<td>A</td>
<td>P</td>
<td>F1</td>
</tr>
</tbody>
</table>

Table 2.
Test cases created for the first version automatically (A), semi-automatically (AM) or manually (M), and their reusability (Pass or Fail) in subsequent versions (2nd and 4th). In the software engineering literature, many works have dealt with the issue of GUI testing, from which many lessons and inspirations can be drawn. Xie and Memon [17] surveyed a large number of GUI testing frameworks and identified four common approaches: (1) writing scripts manually, (2) generating scripts by record-playback, (3) checking results with assertions, and (4) testing internal functionalities only. Sikuli Test supports the first three approaches by focusing on outward visual feedback. Memon [9] further pointed out that automation is important not only for simulating user interactions but also for verifying the results. To automate interactions, Kasik and George [4] built a tool that uses genetic algorithms to automatically generate test scripts that act like a novice user in an unpredictable yet controlled manner. Ostrand et al. [11] developed a tool that can capture and replay interactions demonstrated by testers. This tool represents captured actions in a flow chart that can be edited and rearranged. Similarly, Sikuli Test represents captured interactions as visual action statements that can be edited at will. To automate verification, Memon et al. [10] developed the GUI Ripper tool, which can explore a GUI and extract internal properties of GUI components to generate assertion statements meant to check those properties. However, most existing tools similar to GUI Ripper check results by inspecting the internal properties of GUI components through platform-specific APIs (e.g., Java APIs). Sikuli Test seeks to eliminate this platform dependency by inspecting the visual appearance directly. Commercial GUI testing tools, such as WinRunner [3], are also available to help QA testers perform their tasks.
However, most such tools only allow the testers to script user interactions and direct those interactions by absolute positions. QA testers still need to verify the outcome of each interaction manually and to modify test scripts whenever GUI components are repositioned due to design changes. Sikuli Test seeks to eliminate this need by using visual cues to automate both the interaction and the verification. CAPBAK [13] is a rare exception: a tool that seeks to automate visual verification. It captures the image of the entire GUI after each user activity so that the image can be used later to test whether the GUI still looks exactly the same as before (i.e., regression testing). However, if the test fails, CAPBAK is unable to localize the error. In contrast, by detecting the visual changes before and after each activity, Sikuli Test can help the testers pinpoint the problematic visual effect that requires the programmer’s attention.

SUMMARY AND CONCLUSION

We presented Sikuli Test, a new approach to GUI testing that uses computer vision. Besides meeting the five design goals identified in an interview study with GUI testers, Sikuli Test offers three additional advantages:

1. **Readability of test cases**: The semantic gap between the test scripts and the test tasks they automate is small. It is easy to read a test script and understand which GUI feature the script is designed to test.

2. **Platform independence**: Regardless of the platform a GUI application is developed on, Sikuli Test can be used to test the GUI’s visual feedback. We have shown examples of test scripts written to test the visual feedback of GUI applications on Windows and Mac OS X, as well as Web applications in a browser and mobile applications in an Android emulator. Even though Sikuli Test is not designed to let users write scripts once and run them across multiple platforms, doing so is still possible as long as the applications look the same on each platform.

3.
**Separation of design and implementation**: Test cases can be generated by designers and handed to programmers, who then implement features that must pass the test cases, eliminating the biases that may arise when programmers are asked to test their own implementations.

However, Sikuli Test currently has two major limitations that can be improved upon in the future. First, while Sikuli Test can assert what visual feedback is expected to appear or to disappear, it is unable to detect unexpected visual feedback. For example, if a programmer accidentally places a random image in a blank area, the error goes undetected, since no one would have anticipated the need to test that area with assertions. One solution would be to run the visual feedback detector at test time to see whether there is any detected visual feedback not covered by an assertion statement. Second, Sikuli Test is designed to test a GUI’s outward visual feedback and is thus unable to test the GUI’s internal functionality. For example, while Sikuli Test can check whether visual feedback is correctly provided to the user who clicks the save button, it does not know whether the file is indeed saved. One solution would be to treat Sikuli Test not as a replacement for, but as a complement to, an existing testing tool. Together they can make sure both the outward feedback and the inward functionality of a GUI are sufficiently tested, a task neither can accomplish alone.

ACKNOWLEDGMENTS

We thank the anonymous reviewers and the UID group for great suggestions and feedback. This work was supported in part by the National Science Foundation under award number IIS-0447800 and by Quanta Computer as part of the TParty project. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors.
**Abstract** Programmers have come to embrace dynamically-typed languages for prototyping and delivering large and complex systems. When it comes to maintaining and evolving these systems, the lack of explicit static typing becomes a bottleneck. In response, researchers have explored the idea of gradually-typed programming languages, which allow the incremental addition of type annotations to software written in one of these untyped languages. Some of these new, hybrid languages insert run-time checks at the boundary between typed and untyped code to establish type soundness for the overall system. With sound gradual typing, programmers can rely on the language implementation to provide meaningful error messages when type invariants are violated. While most research on sound gradual typing remains theoretical, the few emerging implementations suffer from performance overheads due to these checks. None of the publications on this topic comes with a comprehensive performance evaluation. Worse, a few report disastrous numbers.

In response, this paper proposes a method for evaluating the performance of gradually-typed programming languages. The method hinges on exploring the space of partial conversions from untyped to typed. For each benchmark, the performance of the different versions is reported in a synthetic metric that associates runtime overhead with conversion effort. The paper reports on the results of applying the method to Typed Racket, a mature implementation of sound gradual typing, using a suite of real-world programs of various sizes and complexities. Based on these results the paper concludes that, given the current state of implementation technologies, sound gradual typing faces significant challenges. Conversely, it raises the question of how implementations could reduce the overheads associated with soundness and how tools could be used to steer programmers clear of pathological cases.
**Categories and Subject Descriptors** D.2.8 [*Software Engineering*]: Metrics—Performance measures

**Keywords** Gradual typing, performance evaluation

### 1. Gradual Typing and Performance

Over the past couple of decades dynamically-typed languages have become a staple of the software engineering world. Programmers use these languages to build all kinds of software systems. In many cases, the systems start as innocent prototypes. Soon enough, though, they grow into complex, multi-module programs, at which point the engineers realize that they are facing a maintenance nightmare, mostly due to the lack of reliable type information.

Gradual typing [21, 26] proposes a language-based solution to this pressing software engineering problem. The idea is to extend the language so that programmers can incrementally equip programs with types. In contrast to optional typing, gradual typing provides programmers with soundness guarantees. Realizing type soundness in this world requires run-time checks that watch out for potential impedance mismatches between the typed and untyped portions of the programs. The granularity of these checks determines the performance overhead of gradual typing. To reduce the frequency of checks, macro-level gradual typing forces programmers to annotate entire modules with types and relies on behavioral contracts [12] between typed and untyped modules to enforce soundness. In contrast, micro-level gradual typing assigns an implicit type Dyn [1] to all unannotated parts of a program; type annotations can then be added to any declaration. The implementation must insert casts at the appropriate points in the code. Different language designs use slightly different semantics with different associated costs and limitations.

Both approaches to gradual typing come with two implicit claims. First, the type systems accommodate common untyped programming idioms. This allows programmers to add types with minimal changes to existing code.
Second, the cost of soundness is tolerable, meaning programs remain performant even as programmers add type annotations. Ideally, types should improve performance, as they provide invariants that an optimizing compiler can leverage. While almost every publication on gradual typing validates some version of the first claim, no project tackles the second claim systematically. Most publications come with qualified remarks about the performance of partially typed programs. Some plainly admit that such mixed programs may suffer performance degradations of up to two orders of magnitude [18, 25, 28].

This paper presents a single result: a method for systematically evaluating the performance of a gradual type system. It is illustrated with an application to Typed Racket, a mature implementation of macro-level gradual typing. We find that Typed Racket’s cost of soundness is not tolerable. If applying our method to other implementations of gradual type systems yields similar results, then sound gradual typing is dead.

The insight behind the method is that to understand the performance of a gradual type system, it is necessary to simulate how a maintenance programmer chooses to add types to an existing software system. For practical reasons, such as limited developer resources or access to source code, it may be possible to add types to only a part of the system. Our method must therefore simulate all possibilities. Thus, applying our method to Typed Racket requires annotating all $n$ modules of a program with types. The resulting collection of $2 \cdot n$ modules ($n$ typed, $n$ untyped) is then used to create $2^n$ configurations. The collection of these configurations forms a complete lattice with the untyped configuration at the bottom and the fully typed one at the top. The points in between represent configurations in which some modules are typed and others are untyped. Adding types to an untyped module in one of these configurations yields a configuration at the next level of the lattice.
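The configuration lattice just described can be sketched in a few lines of Python (an illustration only; the authors' own experiment scripts are written in Racket). Each configuration is encoded as a bit vector over the program's $n$ modules, with bit $i$ set when module $i$ is typed:

```python
def configurations(n):
    """All 2**n typed/untyped configurations of an n-module program,
    encoded as bit vectors: bit i set means module i is typed."""
    return range(2 ** n)

def level(config):
    """A configuration's level in the lattice: its number of typed modules."""
    return bin(config).count("1")

def successors(config, n):
    """Configurations reached by annotating exactly one more module."""
    return [config | (1 << i) for i in range(n) if not config & (1 << i)]

# A 3-module program yields 2**3 = 8 configurations spread over 4 levels,
# from the fully untyped bottom (0b000) to the fully typed top (0b111).
n = 3
by_level = {}
for c in configurations(n):
    by_level.setdefault(level(c), []).append(c)

assert len(list(configurations(n))) == 8
assert sorted(by_level) == [0, 1, 2, 3]
assert by_level[0] == [0b000] and by_level[3] == [0b111]
assert successors(0b000, n) == [0b001, 0b010, 0b100]
```

The bit-vector encoding makes the "one more typed module" step of the lattice a single bitwise OR.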
In short, the lattice mimics all possible choices of single-module type conversions a programmer faces when a maintenance task comes up. A performance evaluation of a system for gradual typing must time these configurations of a benchmark and extract information from these timings. Section 2 introduces the evaluation method in detail, including the information we retrieve from the lattices and how we parameterize these retrievals. The timings may answer basic questions such as how many of these configurations could be deployed without degrading performance too much.

We apply our method to Typed Racket, the gradually typed sister language of Racket. With nine years of development, Typed Racket is the oldest and probably most sophisticated implementation of gradual typing. Furthermore, Typed Racket has also acquired a fair number of users, which suggests adequate performance for these commercial and open source communities. The chosen benchmark programs originate from these communities and range from 150 to 7,500 lines of code. Section 3 presents these benchmarks in detail.

Section 4 presents the results from running all configurations of the Typed Racket benchmarks according to the metrics spelled out in section 2. We interpret the ramifications of these rather negative results in section 5 and discuss the threats to validity of these conclusions. The section also includes our report on a preliminary investigation into the possible causes of the slowdowns in our benchmark configurations.

### 2. Benchmarking Software Evolution

Our evaluation method is inspired by our previous work on extending functional Typed Racket to the object-oriented aspects of Racket, in which we use a lattice-style approach for a preliminary performance evaluation [25]. By inspecting the entire lattices of typed/untyped configurations of two small game systems, we identified and then eliminated a major performance bottleneck from the implementation.
Our previous performance evaluation was conducted in tandem with the design and implementation of Typed Racket, and thus the final results were relatively positive. In contrast, we conduct our current evaluation completely independently of Typed Racket’s implementation efforts.1 Let us re-articulate the salient points from our previous work:

- A software system configuration is a sequence of \( n \) modules.
- Each module in a software system configuration is either typed or untyped.
- A configuration \( c_l \) is greater than a configuration \( c_u \) (or equal) if \( c_l \) uses a typed module for every position in the sequence for which \( c_u \) uses a typed module.
- The collection of all configurations of length \( n \) forms a complete lattice of size \( 2^n \). The bottom element is the completely untyped configuration; the top element is the completely typed one.

We speak of a performance lattice to describe this idea. Our contribution is to exploit the lattice-oriented approach to benchmarking for a summative evaluation. To this end, we imagine software engineers who are considering the use of gradual typing for some program and consider what kinds of questions may influence their decision. Based on this first step, we formulate a small number of parameterized, quantitative measures that capture possible answers to these questions.

When the configuration consists of a small number of modules, the software engineers might be able to equip the entire program with type annotations in one fell swoop. Such a fully annotated system should perform as well as the original, untyped version—and if the gradual type system is integrated with the compiler, it may even run faster because the compiler can apply standard type-based optimization techniques.

**Definition (typed/untyped ratio)** The typed/untyped ratio of a performance lattice is the time needed to run the top configuration divided by the time needed to run the bottom configuration.
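Under the bit-vector encoding of configurations, the lattice ordering and the typed/untyped ratio translate directly into code. This Python sketch is illustrative only, and the timing numbers in it are invented:

```python
def leq(c_u, c_l):
    """c_u <= c_l in the lattice iff every module typed in c_u is also
    typed in c_l (bit vectors: bit set = module is typed)."""
    return c_u & c_l == c_u

def typed_untyped_ratio(timings, n):
    """Running time of the top (fully typed) configuration divided by
    that of the bottom (fully untyped) one; timings maps config -> seconds."""
    top, bottom = (1 << n) - 1, 0
    return timings[top] / timings[bottom]

# Hypothetical timings (seconds) for a 2-module program:
timings = {0b00: 2.0, 0b01: 50.0, 0b10: 60.0, 0b11: 1.4}
assert leq(0b01, 0b11)      # typing one more module moves up the lattice
assert not leq(0b10, 0b01)  # these two configurations are incomparable
assert abs(typed_untyped_ratio(timings, 2) - 0.7) < 1e-12  # typed faster here
```

A ratio below 1, as in the invented numbers above, corresponds to the case where the fully typed program outperforms the untyped one.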
Unfortunately, this assumption overlooks the realities of implementations of gradual typing. Some modules simply cannot be equipped with types because they use linguistic constructs that the type system does not support. Furthermore, completely typed configurations still use the run-time libraries of the underlying untyped language. In particular, Typed Racket’s run-time system remains largely untyped. As a result, even the completely typed configurations of our benchmarks usually import constants, functions, and classes from an untyped module in the run-time system. When these values cross this boundary at run-time, the contract system performs checks, and that imposes additional costs. To address this issue, the implementors of Typed Racket have enlarged their trusted code base with unchecked type environments that cover frequently imported parts of the run-time system. The next section explains what “completely typed” means for the individual benchmarks.

When the software system configuration consists of a reasonably large number of modules, no software engineering team can annotate the entire system with types all at once. Every effort is bound to leave the configuration in a state in which some modules are typed and some others are untyped. As a result, the configuration is likely to suffer from the software contracts that the gradual type system injects at the boundaries between the typed and the untyped portions. If the cost is tolerable, the configuration can be released and can replace the currently deployed version. The runtime costs may not be tolerable, however, as our previous work observes. In that case, the question is how much more effort the software engineers have to invest to reach a releasable configuration. That is, how many more modules must be converted before the performance is good enough for the new configuration to replace the running one.
To capture this idea, we formulate the following definition of “deliverable configurations.”

**Definition ($N$-deliverable)** A configuration in a performance lattice is $N$-deliverable if its performance is no worse than an $N$x slowdown compared to the completely untyped configuration.

We parameterize this definition over the slowdown factor that a team may consider acceptable. One team may think of a 1.1x slowdown as barely acceptable, while another one may tolerate a slowdown of an order of magnitude [25].

Even if a configuration is not deliverable, it might be suitably fast to run the test suites and the stress tests. A software engineering team can use such a configuration for development purposes, but it may not deliver it. The question is how many configurations of a performance lattice are usable in that sense. To formulate this criterion properly, we introduce the following definition of usable configurations.

**Definition ($N/M$-usable)** A configuration in a performance lattice is $N/M$-usable if its performance is worse than an $N$x slowdown and no worse than an $M$x slowdown compared to the completely untyped configuration.

Using the first parameter, we exclude deliverable configurations from the count. The second parameter specifies the positive boundary, i.e., the acceptable slowdown factor for a usable configuration.

**Definition (unacceptable)** For any choice of $N$ and $M$, a configuration is unacceptable if it is neither $N$-deliverable nor $N/M$-usable.

Finally, we can also ask how much work a team has to invest to turn unacceptable configurations into usable or even deliverable configurations.

---

1 In terminology borrowed from the education community [20], we conducted a formative evaluation while this paper conducts a summative evaluation to assess the post-intervention state of the system.
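These definitions amount to bucketing each configuration by its slowdown relative to the untyped bottom of the lattice. A minimal Python sketch, with invented timings and the same bit-vector encoding of configurations, might read:

```python
def classify(timings, N, M):
    """Bucket configurations as N-deliverable, N/M-usable, or unacceptable,
    relative to the fully untyped configuration (config 0)."""
    untyped = timings[0]
    buckets = {"deliverable": [], "usable": [], "unacceptable": []}
    for config, t in timings.items():
        slowdown = t / untyped
        if slowdown <= N:
            buckets["deliverable"].append(config)   # no worse than Nx
        elif slowdown <= M:
            buckets["usable"].append(config)        # between Nx and Mx
        else:
            buckets["unacceptable"].append(config)
    return buckets

# Hypothetical timings for a 2-module program, with N = 3 and M = 10:
timings = {0b00: 1.0, 0b01: 2.5, 0b10: 8.0, 0b11: 40.0}
buckets = classify(timings, N=3, M=10)
assert buckets["deliverable"] == [0b00, 0b01]
assert buckets["usable"] == [0b10]
assert buckets["unacceptable"] == [0b11]
```

The parameters N and M are exactly the knobs the definitions leave to the reader's judgment.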
In the context of macro-level gradual typing, one easy way to measure this amount of work is to count the number of modules that have to be annotated with types before the resulting configuration becomes usable or deliverable. Here is the precise definition.

**Definition ($L$-step $N/M$-usable)** A configuration is $L$-step $N/M$-usable if it is unacceptable and at most $L$ type conversion steps away from an $N$-deliverable or an $N/M$-usable configuration.

This paper thus proposes an evaluation method based on a systematic exploration of the performance lattice. The benefit of parameterized metrics is that every reader can interpret the raw data with his or her own choices for $L$, $N$, and $M$.

### 3. The Benchmark Programs

For our evaluation of Typed Racket, we use a suite of twelve programs. They are representative of actual user code yet small enough so that an exhaustive exploration of the performance lattice remains tractable.

#### 3.1 Overview

The table in figure 2 lists and summarizes our twelve benchmark programs. For each, we give an approximate measure of the program’s size, a diagram of its module structure, and a worst-case measure of the contracts created and checked at runtime. Size is measured by the number of modules and lines of code (LOC) in a program. Crucially, the number of modules also determines the number of gradually-typed configurations to be run when testing the benchmark, as a program with $n$ modules can be gradually-typed in $2^n$ possible configurations. Lines of code is less important for evaluating macro-level gradual typing, but gives a sense of the overall complexity of each benchmark. Moreover, the Type Annotations LOC numbers are an upper bound on the annotations required at any stage of gradual typing because each typed module in our experiment fully annotates its import statements. The column labeled “Other LOC” measures the additional infrastructure required to run each project for all typed-untyped configurations.
This count includes project-wide type definitions, typed interfaces to untyped libraries, and any so-called type adaptor modules (see below).

The module structure graphs show a dot for each module in the program. An arrow is drawn from module A to module B when module A imports definitions from module B. When one of these modules is typed and the other untyped, the imported definitions are wrapped with a contract to ensure type soundness. To give a sense of how “expensive” the contracts at each boundary are, we color arrows to match the absolute number of times contracts at a given boundary are checked. These numbers are independent of the actual configurations. The colors fail to show the cost of checking data structures imported from another library or factored through an adaptor module. For example, the `kcfa` graph has many thin black edges because the modules only share data definitions. The column labeled “Adaptors + Libraries” reports the proportion of observed contract checks due to adaptor modules and libraries.

#### 3.2 Adaptor Modules

A quirk in Racket’s structure-type definitions calls for one twist to an otherwise straightforward setup of benchmark configurations. Consider the following structure-type definition from `gregor`:

```racket
(struct DateTime [date time jd])
```

Its evaluation introduces a new class of data structures via a constructor (`DateTime`), a predicate (`DateTime?`), and a number of selectors. A second evaluation creates a disjoint class of structures, meaning the selectors for the first class do not work on the second and vice versa. If a structure-type definition is exported, a configuration may place the definition in an untyped module and its clients into the typed portion of the program. As explained below, importing a `struct` demands that each client assigns a type to the structure-type definition.
Now, when these typed clients wish to exchange instances of these structure types, the type checker must prove that the static types match. But due to the above quirk, the type system assigns generative static types to imported structure types. Thus, even if the developers who annotate the two clients with types choose the same names for the imported structure types, the two clients actually have mutually incompatible static types.

Figure 1 illustrates the problem with the left-hand diagram. An export of a structure-type definition from the untyped module (star-shaped) to the two typed clients (black squares) ensures that the type checker cannot equate the two assigned static types. The right-hand side of the figure explains the solution: we manually add a *type adaptor module*. Such adaptor modules are specialized typed interfaces to untyped code. The typed clients import structure-type definitions and the associated static types exclusively from the type adaptor, ensuring that only one canonical type is generated for each structure type. Untyped clients remain untouched and continue to use the original untyped file.

Adaptor modules also reduce the number of type annotations needed at boundaries because all typed clients can reference a single point of control. Therefore we expect type adaptor modules to be of independent use to practitioners, rather than just a synthetic byproduct of our setup.

#### 3.3 Program Descriptions

This section briefly describes each benchmark, noting the dependencies and required adaptor modules. Unless otherwise noted, the benchmarks rely only on core Racket libraries and do not use adaptor modules. We credit program authors in parentheses; except for sieve, all programs are independently useful.

**Sieve (Ben Greenman)** This program finds prime numbers using the Sieve of Eratosthenes and is our smallest benchmark. It contains two modules: a streams library and the sieve code. We wrote this benchmark to illustrate the pitfalls of sound gradual typing.

---

2 In our experimental setup, type adaptors are available to all configurations as library files.

---

<table>
<thead>
<tr>
<th>Project name</th>
<th>Modules</th>
<th>Untyped LOC</th>
<th>Type Ann. LOC</th>
<th>Other LOC</th>
<th>Module structure</th>
<th>Adaptors + Libraries</th>
</tr>
</thead>
<tbody>
<tr>
<td>sieve</td>
<td>2</td>
<td>35</td>
<td>17</td>
<td>0</td>
<td></td>
<td>0%</td>
</tr>
<tr>
<td>morse-code</td>
<td>4</td>
<td>216</td>
<td>29</td>
<td>0</td>
<td></td>
<td>0%</td>
</tr>
<tr>
<td>mbta</td>
<td>4</td>
<td>369</td>
<td>77</td>
<td>89</td>
<td></td>
<td>79%</td>
</tr>
<tr>
<td>zordoz</td>
<td>5</td>
<td>1404</td>
<td>285</td>
<td>214</td>
<td></td>
<td>99%</td>
</tr>
<tr>
<td>suffixtree</td>
<td>6</td>
<td>545</td>
<td>125</td>
<td>40</td>
<td></td>
<td>97%</td>
</tr>
<tr>
<td>lnm</td>
<td>6</td>
<td>501</td>
<td>120</td>
<td>62</td>
<td></td>
<td>46%</td>
</tr>
<tr>
<td>kcfa</td>
<td>7</td>
<td>248</td>
<td>47</td>
<td>141</td>
<td></td>
<td>99%</td>
</tr>
<tr>
<td>snake</td>
<td>8</td>
<td>161</td>
<td>50</td>
<td>27</td>
<td></td>
<td>93%</td>
</tr>
<tr>
<td>tetris</td>
<td>9</td>
<td>305</td>
<td>71</td>
<td>38</td>
<td></td>
<td>99%</td>
</tr>
<tr>
<td>synth</td>
<td>10</td>
<td>837</td>
<td>142</td>
<td>33</td>
<td></td>
<td>47%</td>
</tr>
<tr>
<td>gregor</td>
<td>13</td>
<td>996</td>
<td>164</td>
<td>103</td>
<td></td>
<td>78%</td>
</tr>
<tr>
<td>quad</td>
<td>16</td>
<td>6722</td>
<td>300</td>
<td>241</td>
<td></td>
<td>45%</td>
</tr>
</tbody>
</table>

Figure 2: Characteristics of the benchmarks

**Morse code (John Clements & Neil Van Dyke)** This script is adapted from a morse code training program. The original program plays a morse code audio clip, reads keyboard input, and scores the input based on its Levenshtein distance from the correct answer. Our benchmark setup generates morse code strings and runs the Levenshtein algorithm on a list of frequently used words.
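The sieve benchmark described above pairs a lazy stream library with a driver that repeatedly filters the stream of naturals. A minimal Python sketch of that two-module structure (for illustration only; the benchmark itself is written in Racket) could look like:

```python
from itertools import islice

def naturals(start=2):
    """Lazy stream of natural numbers, standing in for the streams module."""
    n = start
    while True:
        yield n
        n += 1

def sift(p, stream):
    """Drop every multiple of p from the stream."""
    return (k for k in stream if k % p != 0)

def primes():
    """Sieve of Eratosthenes over the lazy stream, as in the sieve module."""
    stream = naturals()
    while True:
        p = next(stream)
        yield p                   # the head of the stream is always prime
        stream = sift(p, stream)  # layer one more filter on the stream

assert list(islice(primes(), 5)) == [2, 3, 5, 7, 11]
```

Each prime adds another filtering layer to the stream, which is what makes the benchmark stress the typed/untyped boundary when the stream library and the sieve driver end up on opposite sides of it.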
**MBTA (Matthias Felleisen)** The mbta program builds a representation of Boston’s public transit system and answers reachability queries. It relies on an untyped graph library. The original program responded asynchronously to queries with a server thread. We instead measure a synchronous version of the program to ensure compatibility with Racket’s stack-based profiling tools.

**Zordoz (Ben Greenman)** This tool is used for exploring and counting the frequency of Racket bytecode structures. It operates on the Racket compiler’s untyped zo data structures. Since these data structures are not natively supported in Typed Racket, even the completely typed program incurs some dynamic overhead.

**Suffixtree (Danny Yoo)** This library implements a longest common substring algorithm using Ukkonen’s suffix tree algorithm. While the library has minimal external dependencies, it calls for one adaptor module for the algorithm’s internal data structures.

**LNM (Ben Greenman)** This script analyzes the measurements included in this paper and generates figures 4 and 5. Most of this benchmark’s running time is spent generating figures using Typed Racket’s plot library, so the untyped version of this program is noticeably less performant. This program relies on an untyped image rendering library and uses two adaptor modules.

**KCFA (Matt Might)** This program implements a simple control flow analysis for a lambda calculus. The language definitions and analysis are spread across seven modules, four of which require adaptors because they introduce new datatypes.

**Snake (David Van Horn)** This program is based on a contract verification benchmark by Nguyễn et al. [16]. It implements a game where a growing and moving snake tries to eat apples while avoiding walls and its own tail. Our benchmark runs a pre-recorded history of moves altering the game state and does not display a GUI. We use one adaptor module to represent the game datatypes, but otherwise the program is self-contained.
**Tetris (David Van Horn)** This program is taken from the same benchmark suite as snake [16] and implements the eponymous game. Like snake, the benchmark runs a pre-recorded set of moves. Using it here requires one adaptor module.

**Synth (Vincent St-Amour & Neil Toronto)** The synth benchmark is a sound synthesis example from St-Amour et al.’s work on feature-specific profiling [23]. The program consists of nine modules, half of which are from Typed Racket’s array library. In order to run these library modules in all typed-untyped configurations we create an adaptor module for the underlying array data structure.

**Gregor (Jon Zeppieri)** This benchmark consists of thirteen modules and stress-tests a date and time library. The original library uses a library for ad-hoc polymorphism that is not supported by Typed Racket. Our adaptation instead uses a mono-typed variant of this code and removes the string parsing component. The benchmark uses two adaptor modules and relies on a small, untyped library for acquiring data on local times.

---

3 [http://github.com/jbclements/morse-code-trainer](http://github.com/jbclements/morse-code-trainer)
5 [http://github.com/stamourv/synth](http://github.com/stamourv/synth)

---

**Quad (Matthew Butterick)** This project implements a typesetting library. It depends on an external constraint satisfaction solver library (to divide lines of text across multiple columns) and uses two adaptor modules. The original author provided both untyped and fully typed variants.

### 4. Evaluating Typed Racket

Measuring the running time for the performance lattices of our benchmarks means compiling, running, and timing thousands of configurations. Each configuration is run 30 times to ensure that the timing is not affected by random factors; some configurations take minutes to run. Here we present our measurements in terms of the metrics of section 2.
The first subsection discusses one benchmark in detail, demonstrating how we create the configurations, how the boundaries affect the performance of various configurations, and how the Typed Racket code base limits the experiment. The second subsection explains our findings. The last subsection interprets them.

#### Experimental setup

Due to the high resource requirements of evaluating the performance lattices, experiments were run on multiple machines: Machine A with 12 physical Xeon E5-2630 2.30GHz cores and 64GB RAM; Machine B with 4 physical Core i7-4790 3.60GHz cores and 16GB RAM; Machine C with 4 physical Core i7-3770K 3.50GHz cores and 32GB RAM; and a set of Machines D with identical configurations of 20 physical Xeon E5-2680 2.80GHz cores and 64GB RAM. All machines run a variant of Linux and all benchmarks were run on Racket v6.2. The following benchmarks were run on machine A: sieve, kcfa, and gregor. On machine B: suffixtree, morse-code, mbta, and lnm. On machine C: zordoz and quad. On machine D: snake, synth, and tetris. For each configuration we report the average of 30 runs. All of our runs use a single core for each configuration. We performed sanity checks to validate that the performance differentials reported in the paper were not affected by the choice of machine.

#### 4.1 Suffixtree in Depth

To illustrate the key points of the evaluation, this section describes one of the benchmarks, suffixtree, and explains the setup and its timing results in detail.

Suffixtree consists of six modules: `data` to define labels and tree nodes, `label` with functions on suffixtree node labels, `lcs` to compute longest common substrings, `main` to apply lcs to data, `structs` to create and traverse suffix tree nodes, and `ukkonen` to build suffix trees via Ukkonen’s algorithm. Each module is available with and without type annotations. Each configuration thus links six modules, some of them typed and others untyped.
Typed modules require type annotations on their data definitions and functions. Modules provide their exports with types, so that the type checker can cross-check modules. A typed module may import values from an untyped module, which forces the corresponding require specifications to come with types. Consider this example:

```racket
(require (only-in "label.rkt" make-label ...))
```

The server module is called label.rkt, and the client imports specific values, e.g., make-label. This specification is replaced with a require/typed specification where each imported identifier is typed:

```racket
(require/typed "label.rkt"
  [make-label
   (-> (U String (Vectorof (U Char Symbol))) Label)]
  ...)
```

The types in a require/typed form are compiled into contracts for the imported values. For example, if some imported variable is declared to be a Char, the check char? is performed as the value flows across the module boundary. Higher-order types (functions, objects, or classes) become contracts that wrap the imported value and which check future interactions of this value with its context. The performance costs of gradual typing thus consist of wrapper allocation and run-time checks. Moreover, the compiler must assume that any value could be wrapped, so it cannot generate code that accesses values directly.

The require/typed syntax works whether the server module is typed or untyped: it installs contracts if the server module is untyped, and it ignores the annotation if the server module is typed.

**Performance Lattice.** Figure 3 shows the performance lattice annotated with the timing measurements.

---
6 The scripts that we use to run the experiments are available in our artifact: [http://www.ccs.neu.edu/racket/pubs/popl15-tfgnvf](http://www.ccs.neu.edu/racket/pubs/popl15-tfgnvf)
---
The lattice displays each of the modules in the program with a shape. A filled black shape means the module is typed, an open shape means the module is untyped. The shapes are ordered from left to right and correspond to the modules of suffixtree in alphabetical order: data, label, lcs, main, structs, and ukkonen. For each configuration in the lattice, the ratio is computed by dividing the average timing of that configuration by the untyped average. The figure omits standard deviations as they are small enough to not affect the discussion.

The fully typed configuration (top) is faster than the fully untyped configuration by around 30%, which puts the typed/untyped ratio at 0.7. This can be explained by Typed Racket's optimizer, which performs specialization of arithmetic operations and field accesses, and can eliminate some bounds checks [27]. When the optimizer is turned off, the ratio goes back up to 1.

Sadly, the performance improvement of the typed configuration is the only good part of this benchmark. Almost all partially typed configurations exhibit slowdowns of up to 105x. Inspection of the lattice suggests several points about these slowdowns:

- Adding type annotations to the main module neither subtracts nor adds overhead because it is a driver module.
- Adding types to any of the workhorse modules—data, label, or structs—while leaving all other modules untyped causes a slowdown of at least 35x. This group of modules is tightly coupled. Laying down a typed-untyped boundary to separate elements of this group causes many crossings of values, with the associated contract-checking cost.
- Inspecting data and label further reveals that the latter depends on the former through an adaptor module. This adaptor introduces a contract boundary when either of the two modules is untyped. When both modules are typed but all others remain untyped, the slowdown is reduced to about 13x.
- The structs module depends on data in the same fashion and additionally on label.
Thus, the configuration in which both structs and data are typed still has a large slowdown. When all three modules are typed, the slowdown is reduced to 5x.
- Finally, the configurations close to the worst slowdown case are those in which the data module is left untyped but several of the other modules are typed. This makes sense given the coupling noted above; the contract boundaries induced between the untyped data and other typed modules slow down the program. The module structure diagram for suffixtree in figure 2 corroborates the presence of this coupling. The rightmost node in that diagram corresponds to the data module, which has the most in-edges in that particular graph. We observe a similar kind of coupling in the simpler sieve example, which consists of just a data module and its client.

The performance lattice for suffixtree is bad news for gradual typing. It exhibits performance "valleys" in which a maintenance programmer can get stuck. Consider starting with the untyped program, and for some reason choosing to add types to label. The program slows down by a factor of 88x. Without any guidance, a developer may choose to then add types to structs and see the program slow down to 104x. After that, typing main (104x), ukkonen (99x), and lcs (103x) does little to improve performance. It is only when all the modules are typed that performance becomes acceptable again (0.7x).

**Figure 4:** $L$-step $N/M$-usable results for the first six benchmarks

<table>
<thead>
<tr> <th>Benchmark</th> <th>Typed/Untyped Ratio</th> <th>Max. Overhead</th> <th>Mean Overhead</th> <th>3-Deliverable</th> <th>3/10-Usable</th> </tr>
</thead>
<tbody>
<tr> <td>kcfa</td> <td>1.00x</td> <td>22.67x</td> <td>9.23x</td> <td>32 (25%)</td> <td>48 (38%)</td> </tr>
<tr> <td>snake</td> <td>0.92x</td> <td>121.51x</td> <td>32.30x</td> <td>4 (2%)</td> <td>28 (11%)</td> </tr>
<tr> <td>tetris</td> <td>0.97x</td> <td>117.28x</td> <td>33.34x</td> <td>128 (25%)</td> <td>0 (0%)</td> </tr>
<tr> <td>synth</td> <td>1.03x</td> <td>85.90x</td> <td>39.69x</td> <td>15 (1%)</td> <td>73 (7%)</td> </tr>
<tr> <td>gregor</td> <td>1.22x</td> <td>4.72x</td> <td>2.72x</td> <td>5644 (69%)</td> <td>2548 (31%)</td> </tr>
<tr> <td>quad</td> <td>13.34x</td> <td>56.43x</td> <td>31.50x</td> <td>2046 (3%)</td> <td>5637 (9%)</td> </tr>
</tbody>
</table>

**Figure 5:** $L$-step $N/M$-usable results for the remaining benchmarks

#### 4.2 Reading the Figures

Our method defines the number of \(L\)-step \(N/M\)-usable configurations as the key metric for measuring the quality of a gradual type system. For this experiment we have chosen values of 3x and 10x for \(N\) and \(M\), respectively, and allow up to 2 additional type conversion steps. These values are rather liberal, but serve to ground our discussion.

The twelve rows of graphs in figures 4 and 5 summarize the results from exhaustively exploring the performance lattices of our benchmarks. Each row contains a table of summary statistics and one graph for each value of \(L\) between 0 and 2.

The typed/untyped ratio is the slowdown or speedup of fully typed code over untyped code. Values smaller than 1.0 indicate a speedup due to Typed Racket optimizations. Values larger than 1.0 are slowdowns caused by interaction with untyped libraries or untyped parts of the underlying Racket runtime. The ratios range between 0.28x (\(lnm\)) and 3.22x (\(zordoz\)).

The maximum overhead is computed by finding the running time of the slowest configuration and dividing it by the running time of the untyped configuration. The average overhead is obtained by computing the average running time over all configurations (excluding the fully-typed and untyped configurations) and dividing it by the running time of the untyped configuration. Maximum overheads range from 1.25x (\(lnm\)) to 168x (\(tetris\)). Average overheads range from 0.6x (\(lnm\)) to 68x (\(tetris\)).

The 3-deliverable and 3/10-usable counts are computed for \(L=0\).
In parentheses, we express these counts as a percentage of all configurations for the program.

The three cumulative performance graphs are read as follows. The x-axis represents the slowdown over the untyped program (from 1x to 20x). The y-axis is a count of the number of configurations, scaled so that all graphs are the same height. If \(L\) is zero, the blue line represents the total number of configurations with performance no worse than the overhead on the x-axis. For arbitrary \(L\), the blue line gives the number of configurations that can reach a configuration with performance no worse than the overhead on the x-axis in at most \(L\) conversion steps.

The ideal result would be a flat line at a graph's top. Such a result would mean that all configurations are as fast as (or faster than) the untyped one. The worst scenario is a flat line at the graph's bottom, indicating that all configurations are more than 20x slower than the untyped one. For ease of comparison between graphs, a dashed (red) horizontal line indicates the 60% point along each project's y-axis.

#### 4.3 Interpretation

The ideal shape is difficult to achieve because of the overwhelming cost of the dynamic checks inserted at the boundaries between typed and untyped code. The next-best shape is a nearly-vertical line that reaches the top at a low x-value. All else being equal, a steep slope anywhere on the graph is desirable because the number of acceptable programs quickly increases at that point.

For each benchmark, we evaluate the actual graphs against these expectations. Our approach is to focus on the left column, where \(L=0\), and to consider the center and right columns as rather drastic countermeasures to recover performance.\(^5\)

**Sieve** The flat line at \(L=0\) shows that half of all configurations suffer unacceptable overhead. As there are only 4 configurations in the lattice for sieve, increasing \(L\) improves performance.
**Morse code** The steep lines show that a few configurations suffer modest overhead (below 2x); otherwise \texttt{morse-code} performs well. Increasing \(L\) improves the worst cases.

**MBTA** These lines are also steep, but flatten briefly at 2x. This coincides with the performance of the fully-typed configuration. As one would expect, freedom to type additional modules adds configurations to the 2-deliverable equivalence class.

**Zordoz** Plots here are similar to \texttt{mbta}. There is a gap between the performance of the fully-typed configuration and the performance of the next-fastest lattice point.

**Suffixtree** The wide horizontal areas are explained by the performance lattice in figure 3: configurations' running times are not evenly distributed but instead vary drastically when certain boundaries exist. Increasing \(L\) significantly improves the number of acceptable configurations at 10x and even 3x overhead.

**LNM** These results are ideal. Note the large y-intercept at \(L=0\). This shows that very few configurations suffer any overhead.

**KCFA** The most distinctive feature at \(L=0\) is the flat portion between 1x and 6x. This characteristic remains at \(L=1\), and overall performance is very good at \(L=2\).

**Snake** The slope at \(L=0\) is very low. Allowing \(L=1\) brings a noticeable improvement above the 5x mark, but the difference between \(L=1\) and \(L=2\) is small.

**Tetris** Each \texttt{tetris} plot is essentially a flat line. At \(L=0\) roughly 1/3 of configurations lie below the line. This improves to 2/3 at \(L=1\), and only a few configurations suffer overhead when \(L=2\).

**Synth** Each slope is very low. Furthermore, some configurations remain unusable even at \(L=2\). These plots have few flat areas, which implies that overheads are spread evenly throughout possible boundaries in the program.

**Gregor** These steep curves are impressive given that \texttt{gregor} has 13 modules. Increasing \(L\) brings consistent improvements.
**Quad** The quad plots follow the same pattern as \texttt{mbta} and \texttt{zordoz}, despite being visually distinct. In all three cases, there is a flat slope for overheads below the typed/untyped ratio and a steep increase just after. The high typed/untyped ratio is explained by small differences in the original author-supplied variants.

### 5. Quo Vadis Sound Gradual Typing?

Unsound type systems are useful. They document the code, find bugs at compile-time, and enable the IDE to assist programmers. Sound type systems are useful and meaningful. A soundly typed program cannot go wrong, up to a well-defined set of run-time exceptions [29]. When a typed program raises an exception, the accompanying message usually pinpoints the location of the problem in the program source.

From this description it is clear why programmers eventually wish to annotate programs in untyped languages with types and, ideally, with sound types. Types directly and indirectly increase a programmer's productivity, and sound types help with testing, debugging, and other maintenance tasks. In short, sound gradual typing seems to be a panacea.

The problem is that, according to our measurements, the cost of enforcing soundness is overwhelming. Figures 4 and 5 clarify just how few partially typed configurations are usable by developers or deliverable to customers. For almost all benchmarks, the lines are below the (red) horizontal line of acceptability. Even with extremely liberal settings for \(N\) and \(M\), few configurations are \(N\)-deliverable or \(N/M\)-usable. Worse, investing more effort into type annotation does not seem to pay off. In practice, converting a module takes a good amount of time, meaning that \(L=2\) is again a liberal choice. But even this liberal choice does not increase the number of acceptable configurations by much; worse, it unrealistically assumes that developers can identify the two modules best-suited to improve performance.
Put differently, the number of \(L\)-step \(N/M\)-usable configurations remains small with liberal choices for all three parameters.

The application of our evaluation method projects an extremely negative image of sound gradual typing. While we are confident that the method captures the spirit of the goals of gradual typing, our particular application of the method and its results must be put in perspective. Section 5.1 explains why the evaluation of Typed Racket may look overly negative. Section 5.2 presents an analysis of the worst elements in the twelve lattices and highlights those kinds of contracts that impose the most significant cost.

#### 5.1 Threats to Validity of Conclusion

We have identified four threats to validity. First, our benchmarks are relatively small due to constraints on our computing infrastructure, but even those consume considerable resources. To obtain results for these benchmarks in a reasonable amount of time, they are run using multiple cores and the configurations are divided amongst the cores. Each configuration is put into a single process running a separate instance of the Racket VM pinned to a single core. This parallelism may introduce confounding variables due to, e.g., shared caches or main memory. We have attempted to control for this case and, as far as we can tell, executing on an unloaded machine does not make a significant difference to our results.

Second, several of our benchmarks import some modules from Racket's suite of libraries that remain untyped throughout the process, including for the fully typed configuration. While some of these run-time libraries are part of the trusted code base—meaning Typed Racket knows their types and the types are not compiled to contracts—others are third-party libraries that impose a cost on all configurations. In principle, these interfaces might substantially contribute to the running-time overhead of partially typed configurations.
Regardless, given the low typed/untyped ratios, these libraries are unlikely to affect our conclusions.

Third, the feasible set of type annotations for a program component is rarely unique in a gradually typed system. Since types are translated into contracts in Typed Racket, the choice of type annotations may affect performance. All of our case studies use reasonable type annotations, but type annotations with superior performance may exist. For example, one class-based benchmark (not included, completed after submission) exhibits noticeable differences, though the overall result remains the same. Generally speaking, our results may not be fully representative. Then again, it is still a failure of gradual typing if a programmer must divine the best possible type annotations to obtain reasonable performance.

Finally, we articulate our conclusions on the basis of our preliminary implementation technology. Typed Racket compiles to Racket, which uses rather conventional JIT compilation technology. It makes no attempt to reduce the overhead of contracts or to exploit contracts for optimizations. It remains to be seen whether contract-aware compilers can reduce the significant overhead that our evaluation shows. Nevertheless, we are convinced that even if the magnitude of the slowdowns is reduced, some pathologies will remain.

#### 5.2 What are the Bottlenecks?

To analyze the cost of contract checks, we used the feature-specific profiler [23] on each benchmark's slowest configuration. Figure 6 summarizes our findings. The leftmost data column (%C) gives the percent of each benchmark's total running time that was spent checking contracts. These percentages are the average of ten trials; the numbers in parentheses (S.E.) represent the standard error. Except for the short-running benchmarks (gregor, morse-code, and mbta), we see little variability across trials. As expected, the programs spend a substantial proportion of their running time checking contracts.
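The %C and S.E. columns of figure 6 follow from standard formulas over the per-trial profiles. The sketch below uses invented trial data (the actual profiles are in the artifact) to show the computation:

```python
import math

# Invented contract-checking percentages for ten profiled trials of one
# configuration; illustration only.
trials = [94.1, 93.8, 94.3, 94.0, 93.9, 94.2, 94.1, 93.7, 94.4, 94.0]

mean = sum(trials) / len(trials)  # the %C column: average over ten trials
# Sample standard deviation, then standard error of the mean (the S.E. column).
variance = sum((x - mean) ** 2 for x in trials) / (len(trials) - 1)
std_err = math.sqrt(variance) / math.sqrt(len(trials))

print(f"%C = {mean:.2f} (S.E. {std_err:.2f})")
```

A small standard error, as in the table's long-running benchmarks, indicates that the contract-checking proportion is stable across trials.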
The remaining columns of figure 6 report what percentage of each benchmark's contract-checking execution time is spent on a particular variety of contract:

- Adaptor contracts separate a typed module from an untyped module with data structures.
- Higher-order contracts are function contracts with at least one function in their domain or co-domain.
- Library contracts separate an untyped library from typed modules or vice versa (in the case of lnm).
- The shape (-> T any/c) refers to contracts with a protected argument and an unchecked co-domain. Contracts of this shape typically guard typed functions called in untyped modules.
- Conversely, (-> any/c T) guards functions with (any number of) unchecked arguments and protected co-domains. For example, if a typed module calls an untyped function with immutable arguments, Typed Racket statically proves that the untyped function is given well-typed arguments but must insert a contract to verify the function's result.
- The (-> any/c boolean?) column measures the time spent checking functions that take a single argument and return a Boolean value. It is thus a subset of the (-> any/c T) column.

Other columns overlap as well. The mbta benchmark in particular spends 65% of its contract-checking time on first-order library functions. These checks are always triggered by a typed module on immutable arguments, so Typed Racket optimizes them to (-> any/c T) contracts.

Most strikingly, the (-> any/c boolean?) column suggests that on average twenty percent of the time our benchmarks spend checking contracts goes towards checking that predicate functions satisfy the trivial (-> any/c boolean?) contract. Moreover, nearly all of these predicates are generated by Racket structure definitions, so their type correctness might be assumed. Removing these contracts or optimizing the cost of indirection seems like a clear place for Typed Racket to improve.
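To see why even the trivial contract costs something, consider this Python analogue (a sketch of the idea, not Typed Racket's actual mechanism): every call to a wrapped predicate pays an extra indirection plus a check that the result really is a boolean.

```python
import functools

def any_to_boolean_contract(pred):
    """Analogue of a (-> any/c boolean?) contract: the argument is unchecked
    (any/c), but the result is verified to be a boolean on every call."""
    @functools.wraps(pred)
    def wrapped(value):
        result = pred(value)              # extra indirection on each call
        if not isinstance(result, bool):  # the boolean? check
            raise TypeError(
                f"contract violation: expected boolean?, got {result!r}")
        return result
    return wrapped

# A hypothetical structure predicate, like those generated by Racket struct
# definitions: it trivially satisfies the contract, yet every call through
# the typed-untyped boundary still pays for the wrapper.
@any_to_boolean_contract
def label_p(value):
    return isinstance(value, tuple)

print(label_p((1, 2)), label_p("x"))  # True False
```

Since such predicates are machine-generated and always return booleans, the check never fails in practice; the overhead is pure tax, which is why eliding it looks like an easy win.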
In contrast, the adaptor and library columns suggest that the apparently high cost of predicate contracts may just be a symptom of placing a typed/untyped boundary between a structure type definition and functions closely associated with the data. One example of this is zordoz; indeed, the purpose of that code is to provide an interface to native compiler data structures. In nearly all worst-case measurements for benchmarks using adaptor modules, the adaptor and (-> any/c boolean?) contracts seem to account for a huge proportion of all contracts. The quad benchmark in fact spends 93% of its contract-checking time validating data structures, which are stored in fixed-length lists rather than in structure types. These lists do not require an adaptor, but their types translate to contracts that are far more expensive than plain structure type predicates. The only exception is synth. It spends much more time creating structured data from raw vectors than accessing the data.

Higher-order contracts show up in only a few of the benchmark programs. Specifically, only synth, sieve, and zordoz make heavy use of higher-order functions across contract boundaries. Unlike the cost of first-order contracts, the costs of these higher-order contracts are quite apparent in these programs.

Finally, the \((-\rightarrow \text{ T } \text{ any/c})\) and \((-\rightarrow \text{ any/c T})\) columns give a rough impression of whether untyped or typed modules trigger more contract checks. We confirmed these findings by inspecting the individual programs. For all but three benchmarks, the high-cost contracts are triggered by calls from a typed module into an untyped library or data definition. This includes kcfa, although half its calls from typed to untyped code used mutable arguments and hence could not be reduced to \(\text{any/c}\). The exceptions are lnm, synth, and quad, which suffer from slowdowns when untyped modules import definitions from typed ones.
### 6. The State of the Related Work

Gradual typing is a broad area teeming with both theoretical and practical results. This section focuses on implementations rather than formal models, paying special attention to performance evaluation of gradual type systems.

#### 6.1 Sound Gradual Type Systems

Gradual typing has already been applied to a number of languages: Python [28], Smalltalk [2], Thorn [7] and TypeScript [18, 19]. None of the projects report on conclusive studies of gradual typing's impact on performance. The authors of Reticulated Python recognized the performance issues of gradual typing and designed the language to allow the exploration of efficient cast mechanisms. However, Vitousek et al. note that Reticulated programs perform far worse than their untyped counterparts.

<table>
<thead>
<tr> <th>Project</th> <th>%C</th> <th>(S.E.)</th> <th>adaptor</th> <th>higher-order</th> <th>library</th> <th>(-&gt; T any/c)</th> <th>(-&gt; any/c T)</th> <th>(-&gt; any/c boolean?)</th> </tr>
</thead>
<tbody>
<tr> <td>sieve</td> <td>92</td> <td>(2.33)</td> <td>0</td> <td>46</td> <td>0</td> <td>0</td> <td>0</td> <td>54</td> </tr>
<tr> <td>morse-code</td> <td>29</td> <td>(6.8)</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>100</td> <td>0</td> </tr>
<tr> <td>mbta</td> <td>39</td> <td>(3.65)</td> <td>0</td> <td>0</td> <td>65</td> <td>0</td> <td>65</td> <td>0</td> </tr>
<tr> <td>zordoz</td> <td>95</td> <td>(0.1)</td> <td>0</td> <td>55</td> <td>45</td> <td>0</td> <td>99</td> <td>43</td> </tr>
<tr> <td>suffixtree</td> <td>94</td> <td>(0.18)</td> <td>98</td> <td>&lt;1</td> <td>0</td> <td>2</td> <td>94</td> <td>18</td> </tr>
<tr> <td>lnm</td> <td>81</td> <td>(0.73)</td> <td>0</td> <td>9</td> <td>99</td> <td>91</td> <td>0</td> <td>0</td> </tr>
<tr> <td>kcfa</td> <td>91</td> <td>(0.26)</td> <td>100</td> <td>0</td> <td>0</td> <td>0</td> <td>54</td> <td>31</td> </tr>
<tr>
<td>snake</td> <td>98</td> <td>(0.21)</td> <td>93</td> <td>0</td> <td>0</td> <td>1</td> <td>99</td> <td>49</td> </tr>
<tr> <td>tetris</td> <td>96</td> <td>(0.35)</td> <td>89</td> <td>0</td> <td>0</td> <td>11</td> <td>89</td> <td>44</td> </tr>
<tr> <td>synth</td> <td>83</td> <td>(1.22)</td> <td>51</td> <td>90</td> <td>0</td> <td>29</td> <td>20</td> <td>0</td> </tr>
<tr> <td>gregor</td> <td>83</td> <td>(4.01)</td> <td>78</td> <td>0</td> <td>3</td> <td>7</td> <td>85</td> <td>31</td> </tr>
<tr> <td>quad</td> <td>80</td> <td>(0.96)</td> <td>&lt;1</td> <td>1</td> <td>0</td> <td>3</td> <td>&lt;1</td> <td>&lt;1</td> </tr>
</tbody>
</table>

**Figure 6:** Profiling the worst-case contract overhead

Thorn combines a sound type system with an optional type system, allowing programmers to choose between so-called concrete types and like types [7]. StrongScript follows Thorn's lead by adding a sound type system (with a limited form of higher-order wrappers) to TypeScript. Thorn has a minimal performance evaluation which shows that by sprinkling a few type annotations over toy benchmarks, speed-ups between 3x and 6x can be obtained [30]. Richards et al. use the same microbenchmark suite as Safe TypeScript and compare the runtimes of type-erased and fully-typed versions using their optimizing compiler. They report "no benchmarks demonstrated slowdown outside of noise" (and up to 20% speedups) on the fully-typed versions [19, pg. 97]. In our lattice terminology, the StrongScript comparison reports typed/untyped ratios only. The performance of intermediate states is not evaluated.

#### 6.2 Optional Type Systems

Optional typing can be traced as far back as MACLISP, which allowed users to declare (unchecked) type specifications [15, §14.2] in an otherwise untyped language. The flavor of these annotations, and those in Lisp descendants such as Common Lisp, differs from the contemporary view of optional types as statically-checked annotations for software maintenance.
In Lisp systems, these annotations are used for compiler optimizations and dynamic checking. Pluggable type systems are a closely related idea [9, 10], and also belong to the unsound camp. Recent implementations, e.g. Papi et al.'s work for Java [17], layer additional typed reasoning on top of existing typed languages rather than untyped languages.

Contemporary optional type systems have been developed for Clojure [8], Lua [14], Python, PHP, ActionScript, Dart, and JavaScript [6]. Since the type annotations in these systems are unsound for typed-untyped interoperation, they incur no runtime overhead from proxy wrapping or dynamic checks. The lack of overheads obviates the need for a performance evaluation such as the one in this paper.

Some publications have, however, investigated the performance impact of optional typing with respect to compiler optimizations. Intuitively, one would expect that a compiler could use these annotations as hints to generate faster code. This intuition is borne out by Chang et al. [11], who report significant speed-ups for typed ActionScript code over untyped code. But one should take such results with a pinch of salt as they are highly dependent on the quality of the virtual machine used as the baseline. Richards et al. [19] report at most 20% speed-up for fully typed JavaScript. They ascribe this unimpressive result to the quality of the optimizations implemented in V8. In other words, V8 is able to guess types well enough that providing it with annotations does not help much.

---
10 http://mypy-lang.org
11 http://hacklang.org
13 http://dartlang.org
---

### 7. Long Live Sound Gradual Typing

In the context of current implementation technology, sound gradual typing is dead. We support this thesis with benchmarking results for all possible gradual typing scenarios for a dozen Racket/Typed Racket benchmarks of various sizes and complexities.
Even under rather liberal considerations, few of these scenarios end up in deliverable or usable system configurations. Even allowing for additional conversions of untyped portions of the program does not yield much of an improvement. Our result calls for three orthogonal research efforts. First, Typed Racket is only one implementation of sound gradual typing, and it supports only macro-level gradual typing. Before we declare gradual typing completely dead, we must apply our method to other implementations. The question is whether doing so will yield equally negative results. Safe TypeScript [18] appears to be one natural candidate for such an effort. At the same time, we are also challenged to explore how our evaluation method can be adapted to the world of micro-level gradual typing, where programmers can equip even the smallest expression with a type annotation and leave the surrounding context untouched. We conjecture that annotating complete functions or classes is an appropriate starting point for such an adaptation experiment. Second, Typed Racket’s implementation may not support runtime checks as well as other JIT compilers. Typed Racket elaborates into plain Racket, type-checks the result, inserts contracts between typed and untyped modules, and then uses Racket to compile the result [27]. The latter implements a JIT compiler that open-codes primitive functions. One implication is that code from contracts does not get eliminated even if it is re-evaluated for the same value in a plain loop. A sophisticated JIT compiler may eliminate some of the contract overhead in such cases, but we conjecture that performance pathologies will still remain. Applying our method to an implementation with a more sophisticated compiler, e.g., Pycket [5], may let us validate this conjecture. 
Third, the acceptance of Typed Racket in the commercial and open-source Racket community suggests that (some) programmers find a way around the performance bottlenecks of sound gradual typing. Expanding this community will take the development of both guidelines on how to go about annotating a large system and performance measurement tools that help programmers identify those components of a gradually-typed configuration that yield the most benefit (per time investment). St-Amour's feature-specific profiler [23] and optimization coaches [24] look promising; we used both kinds of tools to find the reason for some of the most curious performance bottlenecks in our measurements.

In sum, while we accept that the current implementation technology for gradually-typed programming languages falls short of its promises, we also conjecture that the use of our method will yield useful performance evaluations to guide future research. Above we have spelled out practical directions, but even theoretical ideas—such as Henglein's optimal coercion insertion [13] and the collapsing of chains of contracts [22]—may take inspiration from the application of our method.

### Data and Code

Our benchmarks and measurements are available in our artifact: http://www.ccs.neu.edu/racket/pubs/#popl15-tfgnvf

### Acknowledgments

The authors gratefully acknowledge support from the National Science Foundation (SHF 1518844). They also thank Matthew Butterick, John Clements, Matthew Might, Vincent St-Amour, Neil Toronto, David Van Horn, Danny Yoo, and Jon Zeppieri for providing benchmark code bases. Brian LaChance and Sam Tobin-Hochstadt provided valuable feedback on earlier drafts.

### Bibliography
Endowing NoSQL DBMS with SQL Features Through Standard Call Level Interfaces

Óscar Mortágua Pereira, David Simões, Rui L. Aguiar
Instituto de Telecomunicações, DETI – University of Aveiro, Aveiro, Portugal
{omp, david.simoes, ruilaa}@ua.pt

Abstract— To store, update and retrieve data from database management systems (DBMS), software architects use tools, such as call-level interfaces (CLI), which provide standard functionalities to interact with DBMS. However, the emergence of the NoSQL paradigm, and particularly of new NoSQL DBMS providers, leads to situations where some of the standard functionalities provided by CLI are not supported, very often due to their distance from the relational model or due to design constraints. As such, when a system architect needs to evolve, namely from a relational DBMS to a NoSQL DBMS, he must overcome the difficulties conveyed by the features not provided by the NoSQL DBMS. Choosing the wrong NoSQL DBMS risks major issues with components requesting non-supported features. This paper focuses on how to deploy features that are not so commonly supported by NoSQL DBMS (like Stored Procedures, Transactions, Save Points and interactions with local memory structures) by implementing them in standard CLI.

Keywords—NoSQL; SQL; databases; middleware; call level interfaces; software architecture.

I. INTRODUCTION

Critical data are mostly kept and managed by database management systems (DBMS). To store, update and retrieve data from DBMS, software architects use software tools to ease the development process of business tiers. Among these, we emphasize call-level interfaces (CLI) [1], which provide an API that allows an application to call methods that propagate to the database. CLI try to build on the commonalities between DBMS and provide a set of methods that encompass these common aspects. Because all DBMS are inherently different, CLI have two main issues to deal with.
Firstly, the way of accessing distinct DBMS is different (protocol, format, query language, etc.), which means every DBMS must have its own implementation, which converts the standard API calls to the proper DBMS format. Secondly, DBMS have different features and support different techniques. CLI try to encompass the most common and often seen capabilities, but some DBMS do not support all of them, while others can support features that CLI do not support. Most NoSQL DBMS, for example, do not support transactions, unlike most relational DBMS.

This paper focuses on how to handle this variety of features supported by different DBMS, focusing primarily on features provided by CLI but not supported by the DBMS. These consist of: 1) transactions; 2) the execution of database functions (like stored procedures); and, finally, 3) interactions with local memory structures containing data retrieved from the database. We provide a framework that allows a system architect to simulate nonexistent features on the underlying DBMS for client applications to use, transparently to them. It is expected that this research can contribute to minimizing the efforts of system architects when DBMS do not support what are considered key features.

The remainder of this paper is organized as follows. Section II presents the state of the art and Section III describes some key functionalities of a CLI (in this case, JDBC). Section IV formalizes our framework, Section V shows our proof of concept and Section VI evaluates our framework. Finally, Section VII presents our conclusions.

II. STATE OF THE ART

There is some work done to bridge the gap between NoSQL and SQL. There have been some solutions focused on providing JDBC drivers to particular DBMS, like [2]–[6], using the DBMS’s own query language (usually SQL-like). The authors’ approach is to create an incomplete JDBC implementation that delegates CLI requests to the DBMS API and converts the results of queries into JDBC’s ResultSet.
There is also work done on translating SQL to the NoSQL paradigm [7]–[12], which allows clients to perform ANSI-SQL commands on NoSQL DBMS. These proposals create a SQL query interface for NoSQL systems, which allows SQL queries to be automatically translated and executed using the underlying API of the data sources. Work has also been done in an attempt to standardize the access API for NoSQL DBMS. Atzeni et al. [13] propose a common programming interface to NoSQL systems (and also to relational ones) called SOS (Save Our Systems). Its goal is to support application development by hiding the specific details of the various systems. There is also research on cross-database tools that depend heavily on JDBC’s features and that cannot be used with NoSQL because their implementations are not complete [14]. To the best of our knowledge, there has not been work done with the goal of implementing CLI’s features on drivers and DBMS that do not support them. We expect that our framework positively contributes to overcoming the gap between NoSQL and SQL.

III. BACKGROUND

As previously stated, CLI try to build on the commonalities between DBMS and provide a set of methods that encompass these common aspects. These methods include, for example, reading data from the database, executing commands on it or performing transactions. Data manipulation commands are usually called ‘CRUD Expressions’, which stand for Create, Read, Update and Delete Expressions, and represent the most common ways to handle data in a DBMS. CLI also usually allow the modification of data on local memory structures, modifications which are propagated to the database transparently, without the client needing to execute any CRUD expression. While most full-fledged DBMS have several complete CLI implementations (Microsoft SQL Server, MySQL, among others), some relational DBMS do not (SQLite, for instance) and most NoSQL DBMS do not either.

A. Java Database Connectivity

The Java Database Connectivity (JDBC) [15] is a CLI API for Java. Because of Java’s portable nature, it has been the most popular development language for NoSQL DBMS and, as such, JDBC is the most popular CLI for NoSQL DBMS, even though it is oriented towards relational DBMS. JDBC drivers typically return “Connection” objects, which are then used to perform operations on the database. The “Connection” object has a given set of capabilities, which include the creation of CRUD statements to be executed on the database, the creation of statements that call functions inside the DBMS (like Stored Procedures) and the usage of transactions (with commits, roll-backs and save points). Associated with connections are ResultSets (RS), which are local memory structures retrieved with "select" queries and representing rows on the database. These use cursors to iterate through their set of data and also allow a set of capabilities, which include the retrieval of values from the current row and, if the RS is defined as ‘updatable’, the insertion or deletion of a row and the modification of the current row’s values. These interactions are going to be referred to as ‘Indirect Access Mode (IAM) Interactions’ through the remainder of this paper. Listing 1 shows the creation of a statement stmt, the retrieval of data from table table1 and how it is kept in the RS (rs). Applications are then allowed to update their content. In this case the attribute attributeName was updated to value and then the modification was committed. We can see how the update is done without the use of any CRUD expression.

```
stmt = conn.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                            ResultSet.CONCUR_UPDATABLE);
rs = stmt.executeQuery("select * from table1");
rs.updateObject("attributeName", value);
rs.updateRow();
conn.commit();
```

Listing 1. A query and the update of a value using JDBC.
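To make the IAM idea concrete outside of any driver, the following toy sketch mimics a local structure whose rows can be updated and deleted without the caller writing any CRUD expression. It is purely illustrative; `LocalRows` is a hypothetical name, not a JDBC type:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: a toy local row store mimicking IAM-style
// interactions. A real driver would propagate these changes to the
// database behind the scenes.
public class LocalRows {
    private final List<Map<String, Object>> rows = new ArrayList<>();
    private final List<Boolean> deleted = new ArrayList<>();

    public void add(Map<String, Object> row) {
        rows.add(new HashMap<>(row));
        deleted.add(false);
    }

    // Modify an attribute of a row directly, with no CRUD expression.
    public void update(int index, String attribute, Object value) {
        rows.get(index).put(attribute, value);
    }

    // Mark a row as deleted instead of physically removing it.
    public void delete(int index) {
        deleted.set(index, true);
    }

    // Return the i-th live row, skipping rows flagged as deleted,
    // the way a cursor over an updatable RS skips deleted rows.
    public Map<String, Object> live(int i) {
        int seen = -1;
        for (int j = 0; j < rows.size(); j++) {
            if (!deleted.get(j) && ++seen == i) {
                return rows.get(j);
            }
        }
        throw new IndexOutOfBoundsException("no live row at index " + i);
    }
}
```

A barrier that has to simulate an updatable RS on top of a read-only one needs bookkeeping of roughly this shape.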
The features that the driver supports can be further grouped by category: statements (with or without parameters), execution of database functions (stored procedures or user-defined functions), transactions (and save points), iteration through RS, retrieval of values from RS and IAM interactions. Some of these features are implemented by all drivers (executing statements on the database, for example). However, the execution of database functions, transactions, save points or IAM interactions is not implemented by some DBMS, depending on their architecture or features. These categories are, then, the focus of this paper.

IV. IMPLEMENTATION FORMALIZATION

To implement these features, there are several options. The first is to create another driver, wrapping the original one, where the methods call the original methods or implement those not supported. The second is to have a server-side middleware layer that intercepts the CLI calls, allows the supported ones and redirects the non-supported ones. The third is to have the clients connect to the server through a regular socket connection; the server either forwards those requests to a JDBC driver connected to the DBMS or executes functions from our framework. While wrapping the driver in another may seem the simplest option (clients can simply use the driver as they usually would, as there is no need for middleware layers to intercept the driver requests or for clients to change the way they connect to the DBMS), it presents some security vulnerabilities, which will be explained further ahead, and also forces the clients to use the modified driver. The second option is the most transparent for clients, but forces a complex implementation on the server, to intercept the JDBC calls and act accordingly, in an imperceptible way for the client. The third option eliminates the need for clients to have any CLI dependency on their code and the server merely acts as a relay from clients to the DBMS.
This makes for a simpler implementation of the server logic, but is not transparent to clients. The last two approaches are similar, consisting of a middleware layer able to identify client requests, and either of them is viable. It is up to each system architect to decide which approach best suits his needs. For the remainder of this paper, the middleware layer (intercepting the CLI calls) where the extra logic is implemented will be referred to as the “barrier”. All the client requests must go through the barrier to access the database. It is able to intercept requests and, instead of forwarding them to the DBMS, provide its own implementation and return the appropriate results to the clients, transparently.

A. Execution of Database Functions

A Stored Procedure (SP) is a subroutine available to applications that access a relational DBMS. Typical uses for SP include data validation (integrated into the DBMS) or access control mechanisms. Furthermore, they can consolidate and centralize logic that was originally implemented in applications. Extensive or complex processing that requires execution of several SQL statements is moved into stored procedures, and all applications call the procedures. SP are similar to User-Defined Functions (UDF), with a few minor differences (how many arguments are returned, ability to use try-catch blocks, among others). If a DBMS does not allow the definition of SP or UDF, these can be implemented on the barrier as a server-side function that calls a group of SQL statements and operations, which are executed together and, therefore, simulate a SP. By doing so, it is possible to simulate most of the behaviors of SP or UDF. There are multiple ways to detect when the functions that simulate SP should be called. A simple one would be to give the client the ability to call a SP by the use of a keyword (e.g., `exec storedProcedure1`), where the SP name would be the function name.
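A minimal sketch of such keyword-based dispatch, assuming SPs are registered on the barrier under their names (the class and method names here are illustrative, not part of any CLI):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of keyword-based SP dispatch on the barrier.
// SpRegistry is a hypothetical name, not part of JDBC.
public class SpRegistry {
    private final Map<String, Function<String[], Object>> procs = new HashMap<>();

    // Register a server-side function under a SP name.
    public void register(String name, Function<String[], Object> body) {
        procs.put(name, body);
    }

    // Intercept a statement; if it starts with the "exec" keyword,
    // run the registered server-side function instead of forwarding
    // the statement to the DBMS.
    public Object execute(String statement) {
        String[] parts = statement.trim().split("\\s+");
        if (parts.length >= 2 && parts[0].equalsIgnoreCase("exec")) {
            Function<String[], Object> sp = procs.get(parts[1]);
            if (sp == null) {
                throw new IllegalArgumentException("unknown SP: " + parts[1]);
            }
            String[] args = Arrays.copyOfRange(parts, 2, parts.length);
            return sp.apply(args);
        }
        // Anything else would be forwarded to the DBMS as usual.
        throw new UnsupportedOperationException("forward to DBMS: " + statement);
    }
}
```

A client would then issue something like `exec getEmpName 7` as an ordinary statement; the barrier intercepts it rather than forwarding it to the DBMS.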
On the barrier, when the `exec` keyword was detected, a function with the same name as the one requested would be called with the arguments supplied and the results would be returned to the client.

B. Transactions

A transaction symbolizes a unit of work performed within a database, treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes: to provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure; and to provide isolation between programs accessing a database concurrently. A database transaction, by definition, must be atomic, consistent, isolated and durable (ACID). In other words, transactions provide an "all-or-nothing" proposition, stating that each work-unit performed in a database must either complete in its entirety or have no effect whatsoever. Furthermore, the system must isolate each transaction from other transactions, results must conform to existing constraints in the database, and transactions that complete successfully must get written to durable storage. The implementation of transactions is a complex engineering problem, heavily dependent on the DBMS architecture. We present a solution that works with most DBMS, but which also depends on the database schema. In our proposal, after a transaction has been started, statements are executed in the usual manner but are also registered in a list. If a rollback ensues, the list is used to undo the changes and return the database to its original state. The implementation of transactions inherently involves applying the ACID properties to a group of statements. Consistency and durability cannot be implemented on the barrier, because these are guaranteed by default by the database itself.
To implement atomicity, along with a list of all the executed actions, there is a need for a list of all the statements that reverse those actions, hereafter referred to as the list of reversers. All inserts are reversed with a delete, all deletes with an insert, updates with updates, and selects do not have to be reverted. To reverse the performed actions, the reverser list of actions must be executed backwards. One needs to pay attention to the database schema and, if an insert triggers other inserts (for logging purposes, for example), all of their reversers must be added to the reverser list. The same happens for cascading updates and deletes. These kinds of mechanisms are mostly found in relational databases, where transactions are natively supported, so we expect few practical cases where these become relevant. As an example, imagine a simple transaction consisting of a bank transfer: money is withdrawn from Account A and deposited in Account B. The money in A cannot fall below 0 and the transaction first deposits the money in B and then withdraws from A. Currently, A has 40€, B has 0€ and the transaction is executed for a transfer of 50€. When the deposit is made, B has 50€ and A still has 40€. Here, the increment is registered in the barrier and the reverser (subtracting 50€) is also registered. Then, the transaction tries to withdraw 50€ from A but it fails, because the value would go below 0. Here, the transaction is rolled back and the actions in the reverser list would be executed, subtracting the money added to B and ending the transaction. The fact that CRUD expressions are kept on the barrier also has an advantage when implementing transactions. If they were on the client side, inside the JDBC driver, it would be the client keeping the list of the reversers needed in case of a rollback. If indeed there was a need for a rollback, the client might not have had the permissions to execute those actions and, therefore, could not roll back.
To solve this, special permissions would need to be set for this case and that could lead to vulnerabilities that an attacker could take advantage of. Formally, our definition states that a transaction is composed of actions (which trigger cascading actions), which affect data in the database. Atomicity in a transaction can be implemented if and only if: for any action in any transaction, all the cascading actions can be found; for any action (or cascading action) in any transaction, there is a reverser; the execution of a reverser undoes all and only the changes made by the original action. Implementing isolation can be done through the use of a single lock (a semaphore or a monitor), which serializes multiple transactions. This concept can be further extended with multiple locks (for example, one for each table), which would allow concurrent transactions if these transactions interacted with (in this example) different tables. Multiple locks can, however, lead to deadlock issues; to avoid them, either one of the transactions has to be reverted (deadlock avoidance/detection) or the locks must all be done at the start of the transaction and must occur in an ordered manner (deadlock prevention). Because the DBMS does not support transactions natively, reverting one is a heavy process, and it can lead to starvation, depending on which transaction is selected to be rolled-back. The second option, however, decreases the system concurrency and also implies knowing a priori all the tables where changes will be made, which might not be possible. As an example of the first solution, consider Transaction A, which wants to change Table t1 and Table t2; and Transaction B, which wants to interact with Table t2 and Table t1, in the opposite order. When the transactions start, both try and lock their first table. Then, one of them, let’s say A, tries to lock the second table and blocks (because the other transaction, B, has that table locked). 
When B tries to lock its second table, a deadlock situation is detected (because A has that table locked) and one of the transactions is rolled back. At that point, the remaining transaction can proceed (because there are no locks on any of the tables now, except its own) and when it is finished, the rolled-back transaction can proceed as well. As an example of the second solution, consider the same situation. When the transactions start, both try to lock both tables. To avoid deadlocks, the locks must be done in an ordered manner. In this case, they could be done alphabetically, and not in the order the transactions use them. Both transactions would try to lock t1 and then t2. The level at which the locks are implemented is also an important choice. With higher levels, implementation is easier and performance is better, but concurrency is worse. As an example, imagine a database-level lock. This single lock allows only a single transaction at a time. The cases where such an implementation would work in a practical manner are very few. SQLite is one of them, given it is a local file meant to be used by a single process at a time. Locks at table level, for example, would have better concurrency; clients can perform transactions on different tables at the same time. However, with many clients or very few tables, this level might still be too restrictive. Some NoSQL DBMSs may not, however, have the concept of ‘tables’. Relational DBMS use row-level locks on transactions, which are ideal in the sense that many clients can perform transactions on the same table, just not on the same piece of data they are handling. However, some DBMSs may not support row distinction and, inherently, may not support row-level locks. Some NoSQL DBMSs also feature millions of rows, which could lead to severe performance issues.

C. Savepoints in Transactions

Assuming transactions have been implemented, the ability to create a save point in a transaction and to roll back to that save point is a simple matter of defining points in the reverser list and only reverting the actions and freeing locks up until that point.

D. IAM Interactions

IAM interactions on a RS consist of the update of values in a row and of the insertion or deletion of rows. By default, a RS’s concurrency type is read only and does not allow any of these. If it does, its type is updatable. To create a RS that allows IAM interactions, a client must specify it when creating the statement object to execute CRUD expressions on the database. The barrier can intercept the creation of this statement object and, if the updatable type is not supported, wrap the RS that is generated inside our framework’s RS, which simulates the necessary behaviors to allow the insertion, update and deletion of rows. This RS is the one supplied to the client, where he will be able to execute IAM interactions as usual. Our first approach was the following: when clients attempt to perform actions on the RS (say, inserting a new row), the actions would be converted and executed like a normal query and the RS would be reset to show the new changes. This had a noticeable performance penalty (performing a CRUD expression for the action and another to update the RS) and led to problems when multiple clients were querying the same tables, due to the fact that by resetting the RS, we were re-querying the table and fetching results affected by other clients. Because of this, we followed a different approach where our original RS is never changed (and where we do not have to re-query the table). Values that are updated or inserted are converted to a CRUD expression, inserted in the table and kept in memory. If the client tries to access those values, our framework presents them from memory, without the need to query data from the table.
Deleted rows are kept track of and ignored when iterating through the values.

Figure 1. Our data structure for IAM interactions with row 2 highlighted.

Figure 1 shows an example of our data structure. When the client requested the RS, rows A to D were queried. The client inserted E and F and deleted A, C and E. Rows E and F are kept in memory, in an array. Rows A, C and E are flagged as deleted. When the client requests the row with index 2, which corresponds to the value D, our implementation iterates through the RS, ignoring deleted rows, until we reach the intended row. With this implementation, there is no unnecessary performance penalty (there is no need to re-query the data) and there are no concurrency issues (each client can modify their own RS and their inserted/deleted values do not affect the other clients’ RS). This mimics the behavior of a relational driver implementation.

V. PROOF OF CONCEPT

This section describes how the mentioned features were implemented.

A. Execution of Database Functions

To define a SP in a common DBMS, an administrator needs to define four aspects: the name, the input, the output and the actual function of the SP. As such, it is expected that the same aspects must be defined to implement SP on the barrier. By defining an abstract class `Barrier_CallableStatement` (implementing the `CallableStatement` interface), which takes as input a JDBC connection, a name String and an array of arguments (that can be either input or output), the SP framework is defined. To specify the SP, a developer instantiates this abstract class and implements the `execute()` method, which will contain all the SP logic and is the only method that needs to change depending on the SP and the underlying database.
As such, all four original aspects are defined and the execution of a SP can be intercepted by the framework, which will then execute the custom implementation, instead of trying to run it on the database, which would throw an error. As an example, Listing 2 shows a stored procedure getEmpName, defined in MySQL, which returns the name of an employee based on his ID, by querying a table Employees, with the fields id and name.

```
CREATE PROCEDURE `EMP`.`getEmpName`
  (IN EMP_ID INT, OUT EMP_NAME VARCHAR(255))
BEGIN
  SELECT name INTO EMP_NAME FROM Employees WHERE ID = EMP_ID;
END
```

Listing 2. Stored Procedure in MySQL.

The usage of this SP in a Java client with a JDBC connection is shown in Listing 3. A CallableStatement is created from the connection object with the SP invocation SQL string. The input and output parameters are defined, the procedure is executed and the output parameter is read. We can see that there are two separate definitions of the same procedure, one in the database and one in the client. Because the SP and the barrier are in the same place, this redundant definition should not be needed. When implementing a SP, a developer extends the Barrier_CallableStatement class and defines the number of arguments and the SP name. The execute method contains all the logic (reading input, processing and setting the output).

```java
CallableStatement stmt = connection.prepareCall(
    "call EMP.getEmpName (?, ?)");
stmt.setInt(1, employeeID);
stmt.registerOutParameter(2, Types.VARCHAR);
stmt.execute();
employeeName = stmt.getString(2);
```

Listing 3. Invocation of the SP in a Java Client.

The usage of this class is quite similar to the original invocation of the SP and is shown in Listing 4. There is no need to register which parameters are output and, in this case, there was no need to refer to the SP name.
The barrier, however, keeps a list of the implemented SP and, when it detects a command like exec getEmpName, matches the desired SP, executes it and returns the corresponding results.

```java
CallableStatement stmt = new SP_getEmpName(conn);
stmt.setInt(1, employeeID);
stmt.execute();
employeeName = stmt.getString(2);
```

Listing 4. Invocation of the SP implementation in a Java Client.

B. Execution of Transactions

Transactions are implemented with an abstract class, just like SPs. Each implementation depends on the underlying DBMS, and the methods that must be overridden are the methods that return the reversers. When the execution of a statement is requested, the reverser is determined and the corresponding lock is activated. Then, the statement is executed and the reverser is added to the list of actions in the current transaction. The commit statement releases the locks being used and clears the list of reversers. In case it is not possible to find the reverser (for example, if the row about to be inserted is not unique and there is no way to delete this specific row, then there is no reverser to be found), an exception is thrown and the statement is not executed. If the statement’s execution throws an error, the reverser is not added to the list. A rollback executes all the reversers in the list backwards and clears the list. If deadlock is detected, one of the transactions is rolled back. The choice of which transaction is selected can be random, by most recent transaction (first come, first served logic), by which transaction detected the deadlock or by which transaction is easiest to roll back (which, while better for performance, can lead to starvation). The ease of rollback can be determined by the size of the actions list or, if actions have different impacts, by the calculation of the impact of all the actions currently in the list. Listing 5 shows an example transaction in a Java client.
The database has a table tb, on which are inserted two tuples, A with ID=1 and B with ID=2. The A value is committed and, therefore, is stored in the database. The B value is rolled back and is not stored in the database. Assuming the table was empty at the start of the transaction, by the end of the transaction, a query should show only a single value, A.

```java
conn.setAutoCommit(false);
try (Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("insert into tb values (1, 'A')");
    conn.commit();
    stmt.executeUpdate("insert into tb values (2, 'B')");
    conn.rollback();
}
conn.setAutoCommit(true);
```

Listing 5. A simple transaction in a Java client.

As before, a transaction using our framework is expected to function in a similar manner. Listing 6 shows the same transaction, using our framework for SQLite. The creation of the Barrier_Transaction object matches the setting of Auto Commit Mode to false in Listing 5 and it handles the creation of the statement object. Then, A is inserted and committed, B is inserted and rolled back and the transaction is closed, which matches the setting of Auto Commit Mode to true.

```java
Barrier_Transaction trans = new Barrier_TransactionSQLite(conn);
trans.execute("insert into tb values (1, 'A')");
trans.commit();
trans.execute("insert into tb values (2, 'B')");
trans.rollback();
trans.close();
```

Listing 6. A transaction using our Framework.

In a SQL compliant DBMS, when each insert action is requested, the corresponding delete action is created. For the A value, for example, the reverser is delete from tb where id=1 and name='A'. On DBMS with different query languages (like Hive), the parsing and creation of reversers would be different. Hence, each DBMS and each schema has its own implementation of the Barrier_Transaction class; schemas with trigger actions need different implementations from schemas without them.
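For the two-column tb schema used above, building the reverser for an insert can be sketched as follows. This is a simplified, illustrative parser for one fixed statement shape; the actual implementation would be per-DBMS and per-schema, and `Reverser` is a hypothetical name:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: builds the delete statement that undoes a
// simple "insert into <table> values (<id>, '<name>')" over a
// two-column (id, name) schema, as in the tb example.
public class Reverser {
    private static final Pattern INSERT = Pattern.compile(
        "insert into (\\w+) values \\((\\d+), '([^']*)'\\)");

    // Returns the reverser for the given insert, or throws if no
    // reverser can be determined (the barrier would then refuse to
    // execute the statement).
    public static String forInsert(String insert) {
        Matcher m = INSERT.matcher(insert.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("no reverser found for: " + insert);
        }
        return "delete from " + m.group(1)
             + " where id=" + m.group(2)
             + " and name='" + m.group(3) + "'";
    }
}
```

For `insert into tb values (1, 'A')` this produces the reverser `delete from tb where id=1 and name='A'`, matching the example in the text.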
There is also a need for a client-wide lock system to be deployed to enforce isolation, as well as a system to prevent deadlocks when handling concurrent transactions. Corbett et al. [16] have shown that there are many different solutions for deadlock detection, both distributed and centralized. In our case, the barrier layer acts as a centralized lock system to guarantee isolation among transactions and, as such, it makes sense to use a centralized deadlock prevention mechanism. We have used table-wide locks with MySQL and Hive and row-level locks with Redis and MongoDB. When a client performs an action during a transaction, the appropriate reverser is found. Immediately after it is determined, the lock is requested from the Concurrency Handler (CH), which requires two things: the URI of the lock (in this case, table names or row keys) and the URI of the requesting process. The CH uses semaphores as locks and creates them as transactions request them. In other words, the first time a client requests the lock for table T1, that semaphore is created. Any following requests for that table use that semaphore. This removes the need for our framework to know the database schema and makes it flexible for any lock level. The CH does not lock the semaphore immediately. Before doing so, it checks whether a deadlock situation would be created. It does so by using a graph structure that represents subjects (each transaction) and objects (each table/row) and checking for cycles. If a cycle were to be created by this lock request, a deadlock situation would emerge [17]. Figure 2 shows an example using the previously mentioned example of transactions A and B trying to lock tables T1 and T2. We can see that we have a deadlock situation. B’s request to T1 leads to its owner, A, which has requested T2, which belongs to B. In our implementation, this situation would never be reached.
Assuming A requested T2 before B requested T1, when B made its request the cycle would be revealed and the transaction would be restarted. When it rolled back, its locks would be released, which would allow A to proceed. When A finished, B would be able to lock both tables and execute as well.

![Figure 2. A graph representation of a deadlock situation.](image)

While this example only features two subjects and two objects, the concept extends naturally to multiple subjects and objects. By solving the deadlock issues, the use of these locks enforces isolation among transactions. Given that a transaction cannot access another transaction's table/row, values being read, modified, deleted or created are safe from concurrent modifications.

### C. Save Points

A client can set a save point in a transaction and roll back only up to that save point, which allows for fine-grained control when handling transaction exceptions. Listing 7 shows a transaction that inserts 3 values but rolls back only one of them (value B). Our save point implementation is based on the Barrier_Transaction class and, logically, depends on each underlying DBMS. To use save points, a client executes all the methods, just as previously shown, on the Barrier_Transaction object.

```java
conn.setAutoCommit(false);
try (Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("insert into tb values (1, 'A')");
    Savepoint sp = conn.setSavepoint("savepoint_one");
    stmt.executeUpdate("insert into tb values (2, 'B')");
    conn.rollback(sp); // undoes only B
    stmt.executeUpdate("insert into tb values (3, 'C')");
    conn.commit();
}
conn.setAutoCommit(true);
```

Listing 7. A transaction with savepoints in a Java client.

### D. IAM Interactions

IAM interactions on a RS imply that the RS has been requested with the updatable type, which enables them. By default, the type is read only. Listing 8 shows how a Java client creates an updatable RS, on which it can then update the third row, insert a new row and delete the second one.
```java
Statement stmt = connection.createStatement(ResultSet.TYPE_SCROLL_SENSITIVE,
                                            ResultSet.CONCUR_UPDATABLE);
ResultSet rs = stmt.executeQuery("SELECT * FROM person");
```

Listing 8. A Java client creating a RS to perform IAM interactions.

Because our aim is to provide as much transparency as possible, the biggest difference is the object request, which uses our wrapper class, as shown in Listing 9. We do not need to specify the type (updatable or read only) because we assume the DBMS only supports read only.

```java
ResultSet rs = new BarrierResultSetSQLite(conn, "SELECT * FROM person",
                                          ResultSet.TYPE_SCROLL_SENSITIVE);
```

Listing 9. A Java client creating a RS with our SQLite implementation.

As previously stated, our implementation depends on the underlying DBMS because it depends on each system's query syntax.

### VI. Evaluation

To demonstrate the soundness of our approach, we have selected four DBMS with different paradigms: SQLite, a relational DBMS; Hive v1.0, a NoSQL DBMS; MongoDB v3.0.2, a document-oriented DBMS; and Redis, one of the most popular key-value DBMS. We expect our concepts to be general enough to be adapted to most NoSQL DBMS. As a basis for comparison, we also used a full-fledged relational DBMS, MySQL, which served as a comparison basis between our barrier implementation and an actual database engine implementation. The lock levels were set to tables for all tests, although Redis and MongoDB could use row IDs. Because Redis does not provide a functional and up-to-date JDBC driver, we developed our own driver, which uses the Redis Java API and converts a simple query language into Redis operations. The choice of which DBMS to use was made taking into account two main aspects: diversity (it is our goal to show that our concept works with any kind of DBMS, so it is important to have both relational and non-relational DBMS, as well as different NoSQL paradigms) and popularity (it is important to choose widely used DBMS).
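The IAM mechanism of Listings 8 and 9 can be emulated over a read-only result set by buffering edits in memory and flushing them as ordinary CRUD statements. The sketch below illustrates that idea; the tiny row model and all class and method names are our own illustrative assumptions, not the paper's BarrierResultSet API.

```java
import java.util.*;

// Illustrative sketch: emulate an updatable result set over a read-only
// one by queueing each edit as a CRUD statement. Row numbers are 1-based,
// as in JDBC. Everything here is an assumption for illustration only.
public class UpdatableRsSketch {
    private final String table;
    private final List<Map<String, Object>> rows;
    private final List<String> pending = new ArrayList<>();

    public UpdatableRsSketch(String table, List<Map<String, Object>> rows) {
        this.table = table;
        this.rows = rows;
    }

    // Rewrite one column of row n and queue the matching UPDATE.
    public void updateRow(int n, String column, Object value) {
        Map<String, Object> row = rows.get(n - 1);
        pending.add(String.format("update %s set %s='%s' where id=%s",
                                  table, column, value, row.get("id")));
        row.put(column, value);
    }

    // Remove row n and queue the matching DELETE.
    public void deleteRow(int n) {
        Map<String, Object> row = rows.remove(n - 1);
        pending.add(String.format("delete from %s where id=%s", table, row.get("id")));
    }

    // Statements the wrapper would execute against the database.
    public List<String> flush() { return List.copyOf(pending); }

    public static void main(String[] args) {
        List<Map<String, Object>> data = new ArrayList<>();
        data.add(new HashMap<>(Map.of("id", 1, "name", "A")));
        data.add(new HashMap<>(Map.of("id", 2, "name", "B")));
        UpdatableRsSketch rs = new UpdatableRsSketch("person", data);
        rs.updateRow(2, "name", "Bob");
        rs.deleteRow(1);
        rs.flush().forEach(System.out::println);
    }
}
```

The residual cost of this buffering is small; as reported in the evaluation below, the dominant cost of an IAM interaction is the CRUD statement itself.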
We tested our framework on a 64-bit Linux Mint 17.1 machine with an Intel i5-4210U @ 1.70GHz, 8GB of RAM and a solid-state drive. All the databases were deployed locally, including Hive, which was set up together with Hadoop as a single-node cluster on this machine. The tests performed include the insertion, update and deletion of values both outside and inside a transaction from our framework. <table> <thead> <tr> <th>Operation</th> <th>SQLite</th> <th>MongoDB</th> <th>Hive</th> </tr> </thead> <tbody> <tr> <td>Insert</td> <td>100</td> <td>749</td> <td>754</td> </tr> <tr> <td></td> <td>500</td> <td>3699</td> <td>4031</td> </tr> <tr> <td></td> <td>1000</td> <td>7993</td> <td>8494</td> </tr> <tr> <td>Update</td> <td>100</td> <td>755</td> <td>758</td> </tr> <tr> <td></td> <td>500</td> <td>4628</td> <td>4096</td> </tr> <tr> <td></td> <td>1000</td> <td>8248</td> <td>8423</td> </tr> <tr> <td>Delete</td> <td>100</td> <td>757</td> <td>746</td> </tr> <tr> <td></td> <td>500</td> <td>3648</td> <td>3784</td> </tr> <tr> <td></td> <td>1000</td> <td>7592</td> <td>7715</td> </tr> <tr> <td>Select</td> <td>100</td> <td>755</td> <td>107</td> </tr> <tr> <td></td> <td>500</td> <td>755</td> <td>107</td> </tr> <tr> <td></td> <td>1000</td> <td>295</td> <td>292</td> </tr> </tbody> </table> Table 1. A comparison of times (in ms) to perform operations in different DBMS with our framework's transactions enabled and disabled. Tests (shown in Table 1) show an expected performance decay on all databases. In SQLite, the decay amounts to approximately 8% of the original time taken for the insert operation, 2% for the update operation and 3% for the delete operation. In MongoDB, the decay is much stronger, with over 200% decay for inserts, 60% for updates and 80% for deletes. For Hive, tests could only involve up to 100 rows, due to time constraints. However, Hive shows good results of about 5% decay in inserts, 3% in updates and 5% in deletes.
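The decay percentages quoted above compare each operation's time with the framework's transactions enabled against the original time with them disabled. A one-line helper makes the computation explicit; the sample figures are placeholders, not measurements from Table 1.

```java
// Trivial helper for the decay metric used in the evaluation, assuming
// decay = (timeEnabled - timeDisabled) / timeDisabled, as a percentage.
public class DecaySketch {
    public static double decayPercent(double disabled, double enabled) {
        return (enabled - disabled) / disabled * 100.0;
    }

    public static void main(String[] args) {
        // e.g. an operation taking 100 ms without transactions and 108 ms with them:
        System.out.println(decayPercent(100.0, 108.0)); // prints 8.0
    }
}
```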
Tests for MySQL and Redis did not provide relevant additional information and were not included. Because queries are an integral part of the transaction process, the decay is directly related to the ratio between the time taken for queries and operations in each DBMS. This explains why MongoDB has a much stronger decay than SQLite or Hive. Tests were also conducted with regard to database-stored functions, rollbacks and IAM interactions. They show that the performance decay is directly related to the performance of a CRUD expression on the database: if a statement takes 10 seconds, an IAM interaction will also take 10 seconds, plus a residual processing time (about 5 to 10 microseconds). The same relation exists for rollbacks and for stored procedures that involve operations in the database.

### VII. Conclusion

We have proposed a framework that implements, on a JDBC driver, features that are not usually provided by NoSQL drivers. Our proposal includes a model for using our framework in a way that allows concurrent clients to perform atomic and isolated transactions, as well as IAM interactions and database functions, such as stored procedures. We have proven our concept with SQLite, Hive, Redis and MongoDB, and we expect our model to be general enough to be extended to other DBMS, relational or NoSQL. Our performance results show that the use of our framework can be suitable for a real-life scenario. However, work is underway to perform a more in-depth performance evaluation of the different DBMS, with test conditions appropriate to each DBMS's architecture and design, which will provide a more insightful analysis. Work is also underway to add fault tolerance to our proposal; our framework does not currently provide atomicity in the case of hardware failures. In conclusion, our framework positively contributes to overcoming the gap between NoSQL and SQL.
It helps system architects simulate key relational DBMS features on NoSQL databases that do not natively support them, and eases the transition from one DBMS to another by abstracting the underlying features of the DBMS.

REFERENCES
Case Study: Zend Server on IBM i Vermont Gas Systems Work Order Management System About John Valance - **Independent consultant** - Specialty is helping iSeries shops develop web applications, and related skills - Training, mentoring, consultation and coding - **25+ years IBM midrange development experience** - **12+ years of web development experience** - Web scripting language of choice = PHP - **Frequent presenter on web development topics** - **Trainer for Zend Technologies** - Teaches Intro to PHP for RPG programmers - Zend Certified Engineer - **Contact:** [johnv@jvalance.com](mailto:johnv@jvalance.com) - 802-355-4024 Agenda • Project Background ▸ Company ▸ IBM i / PHP ▸ Project Goals • Demo of System ▸ recorded • Technical Details ▸ major architectural features Vermont Gas Systems - Natural Gas Utility in northwestern Vermont - Regulated Business - Serves Burlington and surrounding areas - About 40,000 customers (small utility) - Expanding service territory IBM i (aka: iSeries, System i, i5, AS/400) - IBM's legacy midrange platform - Precursors date back to 1970s-80s - Unique design, legendary reliability and longevity - High technology investment protection - Proprietary, integrated, object-based operating system (i/OS) - Many built-in business capabilities - Database (DB2), Security, Communications - Technology Independent Machine Interface - OS has survived many hardware technology changes - Proprietary programming language = RPG - Vast portfolio of 3rd party business applications, in all industries - Typically character-based terminal applications (aka green-screen) - Runs enterprise applications, backbone for many medium to large businesses Screen-shot: IBM i Sign On Display PHP on IBM i - 2005: Zend/IBM partnership - Zend/PHP = Strategic technology for IBM i - IBM i = Strategic platform for Zend - Simple Installation includes Zend Server CE, licenses for Zend Studio - PHP widely accepted by IBM i community - more accessible for RPG programmers
than Java - Demand for education is growing - Typically used to access legacy DB2 tables - Toolkit for IBM i - Access native IBM i system objects - Call RPG programs Background - Reasons for a New System - Project Scope: Rewrite VGS Work Order Mgmt System in PHP - Need to replace 15 y/o legacy, green-screen application, with numerous enhancements - Old system needed many enhancements, was difficult to maintain - Lots of redundant code, hard-coded value lists - Technical goals of new system: - Modern, intuitive user interface - Solid, modular code base - Easily maintainable and extensible Why Custom PHP Solution? - Search for new system included several vendor offerings - One vendor offering was $1 Million+ solution, plus services - Excellent solution, but... - Would have completely disrupted existing business processes - In the end, decision to use custom coded solution - Browser-based interface - PHP / Zend Server running on IBM i - Consultant (me) to lead project and write code Benefits of Custom Approach • Get exactly what they needed • Low risk • Low cost System functionality based on old system, with enhancements • Meet pressing business requirements • Greatly improved interface and code base • Incremental improvements to business processes • Allows for future enhancements • **Maintain Information on Work Orders** - Anything related to construction or maintenance of gas lines - Gas Transmission lines (regional pipelines) - Gas Mains (local pipelines) - Gas Services (pipe to premises) - Repair Leaks (on mains / services) - Retire / Replace main or service - Work Order Type = describes type of line and work to be performed • **Primary users are Engineering Department** - Accounting also researches W/O issues Ancillary Information - Cleanup details - Pipe Exposures - Sewers on site - DIMP Data Collection (Key project goal) - Distribution Integrity Management Plan Interfaces - Marketing - Sales Applications (New Installs) - Accounting Activity / Project Costs - 
Time Sheets / Payroll - Vendor Billing Life Cycle of Work Orders - Create Work Order - Print Work Orders - Several different formats for each WO type - Complete Work Orders - Entry of data collected in field - Close Work Orders - Post details to accounting Technical Features of the System - Hybrid Object Oriented / Procedural Design - Model / View / Controller code organization - Selective use of Zend Framework components - Powerful custom helper classes - Form generation - CRUD SQL generation - Highly consistent, easily maintainable code base - Table searching/filtering/download - Single record CRUD screens - User maintainable list management (Drop-downs) - Security - Authentication with IBM i UID/PSWD - Robust access control at user or group level - Use of JOD Reports to generate PDF reports and printed forms Demo of System VGS Work Order Management System Main Menu Work Orders - All Work Orders - Leaks - Create New Work Order - Create Multiple SR/ST Orders - Pipe Exposures - Cleanups - Sewers - Plastic Pipe Failures - Mechanical Fitting Failures Tables - Projects - Pipe Types - Drop Down Lists Security - Profiles - User/Group Xref - Authorities - Profile Authorities - Test with profile: [Enter user name] [Change user] Reporting - Cancelled Work Orders download - W/O Inventory Reconciliation System - Log Out Use of Zend Framework - Began with desire to implement Zend Framework based architecture - Not full implementation of Framework - R & D period factored into schedule - After 2 months, could not get all features working - Zend_DB_Table, Zend_DB_Select, Zend Paginator - Created home-grown versions of these components - VGS_DB_Table, VGS_DB_Select, VGS Paginator - Used many of the concepts of Framework, but had to "roll our own" classes - Very consistent design, using OO - No front controller, but using a common layout.php on all pages - Handles authentication, error checking, page layout for every screen. 
- Extensions to Zend_Form - VGS_Form extends Zend_Form - VGS_Form_Helper - Automatically sets many form attributes from DB2 metadata General Application Screens Structure - **layout.php** - Included in each view script - Functions to show header and footer - Error handling: `set_error_handler()` - Session management: `session_start()` - Check `isset($_SESSION['userId'])` - If not, redirect to loginCtrl.php **Two main types of applications:** - **Search Screens** - Multi-record, filtered, paginated record lists - **Edit/Display Record screens** - Single-record screens, for CRUD operations Cookie-cutter design for building list and record screens Search Screens - components - **Nav Buttons bar** - Main menu (or close pop-up) - Download (optional) - Create record (optional) - + Custom buttons (app specific) - **Filter bar** - **Paginator bar** - **Search Results** (records matching search criteria) - **Download capability** - Uses filters entered for search ### Example – Work Order Search #### VGS Work Order Management System <table> <thead> <tr> <th>WO#</th> <th>Description</th> <th>Type</th> <th>Status</th> <th>Municipality</th> <th>Entry Date</th> </tr> </thead> <tbody> <tr> <td>67527</td> <td>16 Stanley Ct</td> <td>SI - Service New Construction</td> <td>Pending</td> <td>St. Albans City</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67526</td> <td>15 Stanley Ct</td> <td>SI - Service New Construction</td> <td>Pending</td> <td>St. Albans City</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67525</td> <td>13 Stanley Ct</td> <td>SI - Service New Construction</td> <td>Pending</td> <td>St.
Albans City</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67523</td> <td>37 Saint Albans Rd</td> <td>SI - Service New Construction</td> <td>Pending</td> <td>Swanton Town</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67517</td> <td>Starbird Road</td> <td>MI - Main New Construction</td> <td>Pending</td> <td>Jericho</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67516</td> <td>7 Ducks Ct</td> <td>SM - Service Maintenance</td> <td>Pending</td> <td>Milton Town</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67515</td> <td>Thorpe Ave. Ext @ Fairfax St.</td> <td>MM - Main Maintenance</td> <td>Pending</td> <td>St. Albans City</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67514</td> <td>79 Walnut St</td> <td>SM - Service Maintenance</td> <td>Pending</td> <td>St. Albans City</td> <td>Aug 28, 2013</td> </tr> <tr> <td>67513</td> <td>31 Diamond St #1</td> <td>SM - Service Maintenance</td> <td>Pending</td> <td>St. Albans City</td> <td>Aug 27, 2013</td> </tr> <tr> <td>67511</td> <td>2 Canada St</td> <td>SM - Service Maintenance</td> <td>Pending</td> <td>Swanton Village</td> <td>Aug 26, 2013</td> </tr> <tr> <td>67510</td> <td>Wheatley Ct</td> <td>TI - New Construction Tie-In</td> <td>Pending</td> <td>Colchester</td> <td>Aug 26, 2013</td> </tr> <tr> <td>67509</td> <td>Wheatley Ct</td> <td>MI - Main New Construction</td> <td>Pending</td> <td>Colchester</td> <td>Aug 26, 2013</td> </tr> <tr> <td>67508</td> <td>Woodside Dr for # 92</td> <td>TI - New Construction Tie-In</td> <td>Pending</td> <td>Colchester</td> <td>Aug 26, 2013</td> </tr> </tbody> </table> Edit/Display Record screens - components • **Nav Buttons bar** - Main menu (or close pop-up) - Save (not in display mode) - Cancel - Return to Search List - + Custom buttons (app specific) • **Form** - Field Groups (defined in Form class) Record detail screens - CRUD • **Modes:** - Create, Read (display), Update (edit), Delete • **Create and Edit modes** - Same validations, by default - If form data changed, warning on cancel or navigate away • **Display 
mode** - Same form layout as Create/Edit - Set all inputs to readonly, class="disabled", tabindex="-1", and clearValidators() • **Delete mode** - Show record output only, format like display mode - Replace "Save button" with red "Delete button" - Confirm dialog before delete Three primary objects are involved, related to table being maintained: - **Form object, extends Zend_Form, e.g.:** - class Project_Form extends Zend_form { ... } - **Form Helper object (custom code, very helpful)** - class VGS_FormHelper { ... } - Retrieves DB2 metadata for table fields on form - Sets appropriate filters, validators, attributes based on metadata - **Table object, extends VGS_DB_Table, e.g.:** - class Project_Master extends VGS_DB_Table { ... } • HTML forms are unwieldy, and contain many co-dependent elements ‣ <form> tag ‣ <input> tags of various types ‣ Field labels ‣ CSS and other rendering attributes ‣ Error messages ‣ Data values • Server side code must handle: ‣ Form loading (new record defaults/ existing record values) ‣ Validations based on data types, business rules ‣ Reloading form with values and messages, until valid input Zend Framework provides classes to simplify/abstract form processing - Handle all aspects of form definition and processing in PHP - Use `render()` method to display the form. - Largely avoids HTML, controls everything via structured program code. 
- **Zend_Form** ~= `<form>` - **Zend_Form_Element** ~= `<input>` - Zend_Form object contains multiple Zend_Form_Element objects

```php
$form = new Zend_Form;
$form->setAction('resource/process')
     ->setMethod('post');

$username = new Zend_Form_Element_Text('username');
$username->addValidator('alnum')
         ->addValidator('regex', false, array('/^[a-z]/'))
         ->setRequired(true)
         ->addFilter('StringToLower');

$form->addElement($username);
```

Zend_Form methods - **reset()** - clear form or load defaults for new record in create mode - can override in derived class, with application appropriate values - **populate($dataArray)** - load existing record in edit/display/delete mode - keys of $dataArray are names of form elements - good idea to use DB field names - **validate()** - perform custom validations on inputs - will run all validators added - you can create custom, reusable validators - can override in derived class with custom validations - **render()** - generate HTML <form> and <input> tags for all elements - handles all attributes, including value="..." - decorators handle positioning (HTML container tags) abstract class VGS_Form extends Zend_Form { ...
} VGS_Form - $conn - $fh - $inputs - $mode - $return_point - $screen_title - $valid - __construct($conn) - activate() : void - buildLookup($fieldName) : void - convertDateFormat($dateStr, $fromFmt, $toFmt, $padLen=8) - createRecord() : void - deleteRecord() : void - fixDateInput(string) : void - fixDateOutput(string, $blnOutputOnly=false) : void - fixTimeInput(string) : void - fixTimeOutput(string) : void - getTimeStampOutputFormat($timestamp) : void - isCreateMode() : void - isDeleteMode() : void - isInquiryMode() : void - isUpdateMode() : void - loadScreen() : void - preProcessFormInputs() : void - processScreen() : void - renderFieldGroup($fieldGroupName) : void - renderFormButtons() : void - renderFormHeaderMessage() : void - renderFormHiddens() : void - renderFormJS() : void - renderFormTop() : void - retrieveRecord() : void - returnToCaller() : void - setDateFormat($field) : void - setInputFormatsForDB2() : void - setInputFormatsForScreen($data) : void - updateRecord() : void - validate() : void VGS_Form in a Nutshell - Extends Zend_Form (we get all that goodness!), plus... - Basic data access methods (create, update, retrieve, delete) - General DB2 data filtering (screen <-> database) - Form rendering - Field groups, elements, buttons, messages, hidden fields, JavaScripts. - Form processing - Initialization, loading from database, validation, database update, redirecting after success - Boolean mode methods - isInquiryMode(), isUpdateMode(), etc... - Attaching input helpers / popups - Lookups (foreign key table search popup) - Date pickers VGS_Form_Helper - Loads metadata from DB2 for one or more tables - Builds form elements from the metadata for selected fields - Adds appropriate data type validations, filters and attributes, and labels to form elements - This saves a lot of tedious coding - Allows definition of field groups (boxes of fields on screen) - Fields can be added to form by passing comma-separated list of field names. 
- Can define additional validations and attributes for lists of fields - Required entry, output only, override input type, attribs class VGS_FormHelper VGS_FormHelper - $elements - $fieldGroups - $metaData - $mode - $view - __construct() - addCustomMetaData($name, $text, $type, $length, $precision="") : void - addElement($name, $element) : void - addElementsFromMetaData() : Count - addFieldGroup($fieldList, $fieldGroup, $caption) : void - addMetaData($conn, $table) : void - buildElementFromMetaData($elemMeta) : void - getElements() : void - getMetaData() : void - getObjectLibrary($object, $objType) : void - renderFieldGroup($fieldGroup, Zend_Form) : void - setDescription($fieldName, $description) : void - setElementDataTypeFilter(Zend_Form_Element, $meta) : void - setElementProperty($name, $property, $value) : void - setElementsProperties($namesList, $property, $value) : void - setMultiOptions($fieldName, $optionsList) : void - splitNames(string) : true class VGS_FormHelper { /** The $metaData array will be used to generate labels, * filters and validators for the form elements automatically. * @var array */ public $metaData = array(); /** Contains an array of Zend_Form_Element to include on the form * @var array */ private $elements = array(); /** * Holds an array describing the field groupings for display * @var array */ public $fieldGroups = array(); class WO_SewerForm extends VGS_Form { private $wswRec; // Record array for existing w/o sewer record private $woRec; // Complete w/o record for the related w/o // Key fields for this sewer record private $woNum; private $wswSeqNo; public function __construct( $conn, $woNum ) { parent::__construct ( $conn ); {constructor stuff}...
$this->fh->addMetaData($conn, "WO_SEWER"); $this->setDefaultElements( ); } } public function addMetaData($conn, $table) { $schema = self::getObjectLibrary($table, '*FILE'); $syscols = new VGS_DB_Table($conn); $query = "select * from qsys2/syscolumns where table_schema = '$schema' and system_table_name = '$table' "; $rs = $syscols->execListQuery($query); while ($sysColumn = db2_fetch_assoc($syscols->stmt)) { // Add each column's metadata to the master metadata array $colName = $sysColumn['COLUMN_NAME']; $this->metaData[$colName] = $sysColumn; } } ### WSW_ADDRESS - MetaData - **UPPER_CASE** = fields from QSYS2/SYSCOLUMNS - **Green** = attributes used to generate form elements - **Red** = attributes added by custom code <table> <thead> <tr> <th>FIELD</th> <th>VALUE</th> </tr> </thead> <tbody> <tr> <td>COLUMN_NAME</td> <td>WSW_ADDRESS</td> </tr> <tr> <td>TABLE_NAME</td> <td>WO_SEWER</td> </tr> <tr> <td>TABLE_OWNER</td> <td>ORCOM</td> </tr> <tr> <td>ORDINAL_POSITION</td> <td>3</td> </tr> <tr> <td>DATA_TYPE</td> <td>VARCHAR</td> </tr> <tr> <td>LENGTH</td> <td>100</td> </tr> <tr> <td>NUMERIC_SCALE</td> <td></td> </tr> <tr> <td>IS_NULLABLE</td> <td>N</td> </tr> <tr> <td>IS_UPDATABLE</td> <td>Y</td> </tr> <tr> <td>LONG_COMMENT</td> <td></td> </tr> <tr> <td>HAS_DEFAULT</td> <td>Y</td> </tr> <tr> <td>COLUMN_HEADING</td> <td>Address</td> </tr> <tr> <td>STORAGE</td> <td>102</td> </tr> <tr> <td>NUMERICPRECISION</td> <td></td> </tr> <tr> <td>CCSID</td> <td>37</td> </tr> <tr> <td>TABLE_SCHEMA</td> <td>WORKORDT</td> </tr> <tr> <td>COLUMN_DEFAULT</td> <td>''</td> </tr> <tr> <td>CHARACTER_MAXIMUM_LENGTH</td> <td>100</td> </tr> <tr> <td>CHARACTER_OCTET_LENGTH</td> <td>100</td> </tr> <tr> <td>NUMERIC_PRECISION_RADIX</td> <td></td> </tr> <tr> <td>DATETIME_PRECISION</td> <td></td> </tr> <tr> <td>COLUMN_TEXT</td> <td>Address</td> </tr> <tr> <td>SYSTEM_COLUMN_NAME</td> <td>WSWADDR</td> </tr> <tr> <td>SYSTEM_TABLE_NAME</td> <td>WO_SEWER</td> </tr> <tr> <td>SYSTEM_TABLE_SCHEMA</td> 
<td>WORKORDT</td> </tr> <tr> <td>USER_DEFINED_TYPE_SCHEMA</td> <td></td> </tr> <tr> <td>USER_DEFINED_TYPE_NAME</td> <td></td> </tr> <tr> <td>IS_IDENTITY</td> <td>NO</td> </tr> <tr> <td>IDENTITY_GENERATION</td> <td></td> </tr> <tr> <td>IDENTITY_START</td> <td></td> </tr> <tr> <td>IDENTITY_INCREMENT</td> <td></td> </tr> <tr> <td>IDENTITY_MINIMUM</td> <td></td> </tr> <tr> <td>IDENTITY_MAXIMUM</td> <td></td> </tr> <tr> <td>IDENTITY_CYCLE</td> <td></td> </tr> <tr> <td>IDENTITY_CACHE</td> <td></td> </tr> <tr> <td>IDENTITY_ORDER</td> <td></td> </tr> <tr> <td>group</td> <td>sewer</td> </tr> <tr> <td>include</td> <td>1</td> </tr> <tr> <td>label-class</td> <td>required</td> </tr> <tr> <td>required</td> <td>1</td> </tr> </tbody> </table>

```php
public function setDefaultElements( )
{
    $this->fh->addFieldGroup($flWO, 'wo', 'Work Order Details');
    $this->fh->setElementsProperties($flWO, 'output_only', true);

    $flSewer = 'WSW_SEQNO, WSW_ADDRESS, WSW_CITY, WSW_LOCATED_PRIOR, WSW_SEWER_SIZE,'
             . 'WSW_SEWER_MATERIAL, WSW_SEWER_TYPE, WSW_SEPARATION_FROM_GAS, WSW_INSPECTION_NEEDED, WSW_INSPECT_REASON';
    $this->fh->addFieldGroup($flSewer, 'sewer', 'Sewer Information');

    $this->fh->setElementsProperties('WSW_SEQNO', 'output_only', true);
    $this->fh->setElementsProperties(
        'WSW_ADDRESS, WSW_CITY, WSW_SEWER_TYPE, WSW_SEPARATION_FROM_GAS',
        'required', true);
    $this->fh->setElementsProperties(
        'WSW_LOCATED_PRIOR, WSW_INSPECTION_NEEDED', 'input_type', 'y/n');
    $this->fh->setElementsProperties(
        'WSW_CITY, WSW_SEWER_TYPE, WSW_SEPARATION_FROM_GAS', 'input_type', 'select');
    $this->fh->setElementsProperties('WSW_NOTES', 'input_type', 'textarea');
    // etc...

    // This creates Zend_Form_Elements out of the meta data
    $this->fh->addElementsFromMetaData();
    $this->addElements($this->fh->getElements());

    // Add a drop-down (<select>) list for Town
    $dd = new Code_Values_Master($this->conn);
    $ddList = $dd->getCodeValuesList('TOWN', '');
    $this->fh->setMultiOptions('WSW_CITY', $ddList);
    // etc...
```
<table> <thead> <tr> <th>Values</th> <th>Drop Down ID</th> <th>Description</th> <th>Status</th> <th>Codes</th> <th>Last Update</th> </tr> </thead> <tbody> <tr> <td>REPAIR_METHOD_EQUIP</td> <td>Repair method/equipment for maintenance W/Os</td> <td>Active</td> <td>11</td> <td></td> <td></td> </tr> <tr> <td>CONDITION_FOUND</td> <td>Pipe condition found for maintenance W/Os</td> <td>Active</td> <td>10</td> <td></td> <td></td> </tr> <tr> <td>CONS_TYPES_MI</td> <td>Construction Types for Main Install</td> <td>Active</td> <td>4</td> <td></td> <td></td> </tr> <tr> <td>CONS_TYPES_SI</td> <td>Construction Types for Service Install</td> <td>Active</td> <td>5</td> <td></td> <td></td> </tr> <tr> <td>PP_CONTACT</td> <td>Default contact info for plastic pipe failure</td> <td>Active</td> <td>3</td> <td></td> <td></td> </tr> <tr> <td>PP_MANUFACTURER</td> <td>Plastic Pipe Failure - Manufacturer</td> <td>Active</td> <td>3</td> <td></td> <td></td> </tr> <tr> <td>SEWER_TYPE</td> <td>Sewer Types</td> <td>Active</td> <td>5</td> <td></td> <td></td> </tr> <tr> <td>PREMISE_STS</td> <td>Premise record status codes (UPRM)</td> <td>Active</td> <td>10</td> <td></td> <td></td> </tr> <tr> <td>METHOD_OF_CONSTRUCTION</td> <td>Method of Construction</td> <td>Active</td> <td>6</td> <td></td> <td></td> </tr> <tr> <td>WCN_EMAIL_ADDR</td> <td>Email address to receive W/O cancellation</td> <td>Active</td> <td>3</td> <td></td> <td></td> </tr> <tr> <td>WCN_REASON_CODE</td> <td>W/O Cancellation Reasons</td> <td>Active</td> <td>9</td> <td></td> <td></td> </tr> <tr> <td>PRI_CUSTOMER_EXCAVATED</td> <td>Customer Excavated</td> <td>Active</td> <td>4</td> <td></td> <td>Thu, Jan 12 2012 at 10:14:19 am</td> </tr> <tr> <td>RATECLASS</td> <td>Rate Classes</td> <td>Active</td> <td>6</td> <td></td> <td>Mon, Nov 21 2011 at 11:14:32 pm</td> </tr> <tr> <td>WPE_PIPE_COMPOSITION</td> <td>Pipe Composition</td> <td>Active</td> <td>6</td> <td></td> <td>Fri, Oct 07 2011 at 04:19:28 pm</td> </tr> <tr> <td>LK_SURVEY_TYPE</td> 
<td>Leak Survey Type</td> <td>Active</td> <td>5</td> <td></td> <td>Fri, Oct 07 2011 at 04:17:14 pm</td> </tr> <tr> <td>AP_PROFILE_TYPE</td> <td>Authority Profile Type</td> <td>Active</td> <td>2</td> <td></td> <td>Fri, Aug 12 2011 at 04:12:56 pm</td> </tr> <tr> <td>AP_PERMISSION</td> <td>Authority/Profile Permissions</td> <td>Active</td> <td>4</td> <td></td> <td>Fri, Aug 12 2011 at 12:57:28 pm</td> </tr> <tr> <td>AD_Functional_AREA</td> <td>Grouping ID for Authority Definitions</td> <td>Active</td> <td>2</td> <td></td> <td>Fri, Aug 12 2011 at 12:50:20 pm</td> </tr> <tr> <td>WO_RETIRED_MAIN</td> <td>Retired With or At Main</td> <td>Active</td> <td>3</td> <td></td> <td>Tue, Jul 26 2011 at 11:03:53 am</td> </tr> <tr> <td>LK_EVENTS</td> <td>Leak Collateral Incidents</td> <td>Active</td> <td>3</td> <td></td> <td>Mon, Jul 25 2011 at 04:56:37 pm</td> </tr> <tr> <td>WO_METER_LOCATION</td> <td>Meter Location</td> <td>Active</td> <td>4</td> <td></td> <td>Mon, Jul 18 2011 at 03:29:02 pm</td> </tr> <tr> <td>PRI_STATUS</td> <td>Project Status</td> <td>Active</td> <td>5</td> <td></td> <td>Tue, Jul 12 2011 at 02:29:24 pm</td> </tr> <tr> <td>WO_FLOW_LIMITER_SIZE</td> <td>Valid Flow Limiter Sizes</td> <td>Active</td> <td>3</td> <td></td> <td>Fri, Jul 08 2011 at 12:32:24 pm</td> </tr> <tr> <td>MF_MECHANICAL_FITTING</td> <td>Mech Fitting Failure Fitting</td> <td>Active</td> <td>4</td> <td></td> <td>Thu, Jul 07 2011 at 02:48:01 pm</td> </tr> <tr> <td>MF_SUPPLEMENTAL_REPORT</td> <td>Mech Fitting Supplemental Report</td> <td>Active</td> <td>2</td> <td></td> <td>Wed, Jul 06 2011 at 03:41:47 pm</td> </tr> <tr> <td>MF_INITIAL_REPORT</td> <td>Mech Fitting Initial Report - Y/N</td> <td>Active</td> <td>2</td> <td></td> <td>Wed, Jul 06 2011 at 03:39:45 pm</td> </tr> </tbody> </table> ### User Maintainable Drop Down Lists (for `<select>`) #### VGS Work Order Management System **Drop Down Values Search** <table> <thead> <tr> <th>List</th> <th>Seq#</th> <th>Code</th> <th>Value</th> 
<th>Description</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td>SEWER_TYPE</td> <td>1.0000</td> <td>SEWER</td> <td>Sewer</td> <td></td> <td>Active</td> </tr> <tr> <td>SEWER_TYPE</td> <td>2.0000</td> <td>SEPTIC</td> <td>Septic</td> <td></td> <td>Active</td> </tr> <tr> <td>SEWER_TYPE</td> <td>3.0000</td> <td>BOTH</td> <td>Both (sewer/septic)</td> <td></td> <td>Active</td> </tr> <tr> <td>SEWER_TYPE</td> <td>4.0000</td> <td>NONE</td> <td>None</td> <td></td> <td>Active</td> </tr> <tr> <td>SEWER_TYPE</td> <td>5.0000</td> <td>UNKOWN</td> <td>Unknown</td> <td></td> <td>Active</td> </tr> </tbody> </table>

Defines all table-specific data access methods. The parent class (VGS_DB_Table) encapsulates basic DB2 functionality:

- Public methods used with search lists and VGS_Paginator:
  - execListQuery($queryString, $bindParms = array())
  - execScrollableListQuery(VGS_DB_Select $select)
  - getRowCount(VGS_DB_Select $select)
- Public methods used to retrieve and update a single record (detail screens):
  - execUpdate($queryString, $bindParms = array())
  - fetchRow($queryString, $bindParms = array())
- Security:
  - checkPermissionByCategory($category, $mode) - ensures the user has authority to the table for the given mode

VGS_DB_Table - SQL generator methods

autoCreateRecord(array $inputs)
autoUpdateRecord(array $inputs)
autoDeleteRecord(array $inputs)

• Automatically build SQL statements from form inputs
• If form fields change, you never need to modify SQL statements
• Never have to align field names and values
• Uses bound parameters - no need to align parameter markers (?)
• Huge time saver; ensures accurate updates without coding
• Bound parameters prevent SQL injection attacks

VGS_DB_Table - public attributes

Name of the database table, specified in UPPERCASE.
public $tableName;
Field names prefix for this table (eg: 'WO_'); used to extract the update fields from form inputs.
public $tablePrefix; Array of key field names for this table public $keyFields; Boolean = table includes audit fields (Default = true) public $hasAuditFields; Boolean = physical record delete is allowed. (Default = false) public $isRecordDeletionAllowed; - With above attributes, system can automatically create SQL for create, update, delete, and set audit fields appropriately. Example of VGS_DB_Table based class class WO_Sewer extends VGS_DB_Table { public function __construct($conn) { parent::__construct($conn); $this->tableName = 'WO_SEWER'; $this->tablePrefix = 'WSW_'; $this->keyFields = array('WSW_WO_NUM', 'WSW_SEQNO'); $this->hasAuditFields = true; $this->isRecordDeletionAllowed = true; } public function create( $rec ) { $this->checkPermissionByCategory('WO', 'CREATE'); $rec['WSW_SEQNO'] = $this->getNextSewerNum($rec['WSW_WO_NUM']); $this->autoCreateRecord($rec); } public function update( $rec ) { $this->checkPermissionByCategory('WO', 'UPDATE'); $this->autoUpdateRecord($rec); } } Example - Update Sewer Details ### Work Order Details <table> <thead> <tr> <th>Field</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>W/O Number</td> <td>64901</td> </tr> <tr> <td>W/O Type</td> <td>SI</td> </tr> <tr> <td>W/O Status</td> <td>CMP</td> </tr> <tr> <td>Date Completed</td> <td>Oct 08, 2012</td> </tr> </tbody> </table> ### Method of Construction <table> <thead> <tr> <th>Method</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>MOC Trench</td> <td></td> </tr> <tr> <td>MOC HDD</td> <td></td> </tr> <tr> <td>MOC Hog</td> <td></td> </tr> <tr> <td>MOC Plowed</td> <td></td> </tr> <tr> <td>MOC Other</td> <td></td> </tr> </tbody> </table> ### Sewer Information <table> <thead> <tr> <th>Field</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Address</td> <td>19 Whisper Ln</td> </tr> <tr> <td>City</td> <td>Milton Village</td> </tr> <tr> <td>Sewer Located Prior</td> <td></td> </tr> <tr> <td>Sewer Size</td> <td></td> </tr> <tr> <td>Sewer Material</td> <td></td> </tr> <tr> <td>Sewer 
Type</td> <td>Septic</td> </tr> <tr> <td>Separation From New Gas Install</td> <td>Rear/Opposite Side</td> </tr> <tr> <td>Inspection Needed</td> <td></td> </tr> </tbody> </table> ### Comments <table> <thead> <tr> <th>Field</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>General Notes</td> <td></td> </tr> </tbody> </table> ### Sewer Record Maintenance Info <table> <thead> <tr> <th>Field</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Created by User ID</td> <td>KIM</td> </tr> <tr> <td>Date/Time Created</td> <td>Tue, Oct 05 2012 at 02:06:15 pm</td> </tr> <tr> <td>Last Changed by User ID</td> <td></td> </tr> <tr> <td>Date/Time Last Changed</td> <td></td> </tr> </tbody> </table> Sewer Update - example input values - Inputs array passed to VGS_Form->autoUpdateRecord( $inputs ) - Red elements are ignored: prefix not = 'WSW_' ``` WSW_WO_NUM = 64901 WSW_SEQNO = 1 popup = mode = update a = update return_point = /wotest/controller/wswListCtrl.php WO_TYPE = SI WO_STATUS = CMP WO_DATE_COMPLETED = Oct 08, 2012 WSW_ADDRESS = 19 Whisper Ln WSW_CITY = MLV WSW_LOCATED_PRIOR = N WSW_SEWER_SIZE = WSW_SEWER_MATERIAL = WSW_SEWER_TYPE = SEPTIC WSW_SEPARATION_FROM_GAS = REAR WSW_INSPECTION_NEEDED = N WSW_INSPECT_REASON = in rear WSW_MOC_TRENCH = Y WSW_MOC_HDD = Y WSW_MOC_HOG = N WSW_MOC_PLOWED = N WSW_MOC_OTHER = WSW_NOTES = ``` • Builds SQL update string, and array of values to bind, then... • $this->execUpdate($sql, $values); \$sql: update WO_SEWER set WSW_ADDRESS = ?, WSW_CITY = ?, WSW_LOCATED_PRIOR = ?, WSW_SEWER_SIZE = ?, WSW_SEWER_MATERIAL = ?, WSW_SEWER_TYPE = ?, WSW_SEPARATION_FROM_GAS = ?, WSW_INSPECTION_NEEDED = ?, WSW_INSPECT_REASON = ?, WSW_MOC_TRENCH = ?, WSW_MOC_HDD = ?, WSW_MOC_HOG = ?, WSW_MOC_PLOWED = ?, WSW_MOC_OTHER = ?, WSW_NOTES = ?, WSW_CHANGE_USER = ?, WSW_CHANGE_TIME = current timestamp where WSW_WO_NUM = ? AND WSW_SEQNO = ? 
\$values: <table> <tbody> <tr> <td>WSW_ADDRESS</td> <td>19 Whisper Ln</td> </tr> <tr> <td>WSW_CITY</td> <td>MLV</td> </tr> <tr> <td>WSW_LOCATED_PRIOR</td> <td>N</td> </tr> <tr> <td>WSW_SEWER_SIZE</td> <td></td> </tr> <tr> <td>WSW_SEWER_MATERIAL</td> <td></td> </tr> <tr> <td>WSW_SEWER_TYPE</td> <td>SEPTIC</td> </tr> <tr> <td>WSW_SEPARATION_FROM_GAS</td> <td>REAR</td> </tr> <tr> <td>WSW_INSPECTION_NEEDED</td> <td>N</td> </tr> <tr> <td>WSW_INSPECT_REASON</td> <td>in rear</td> </tr> <tr> <td>WSW_MOC_TRENCH</td> <td>Y</td> </tr> <tr> <td>WSW_MOC_HDD</td> <td>Y</td> </tr> <tr> <td>WSW_MOC_HOG</td> <td>N</td> </tr> <tr> <td>WSW_MOC_PLOWED</td> <td>N</td> </tr> <tr> <td>WSW_MOC_OTHER</td> <td></td> </tr> <tr> <td>WSW_NOTES</td> <td></td> </tr> <tr> <td>WSW_CHANGE_USER</td> <td>JVALANCE</td> </tr> <tr> <td>WSW_WO_NUM</td> <td>64901</td> </tr> <tr> <td>WSW_SEQNO</td> <td>1</td> </tr> </tbody> </table>

• **Red** = audit fields - automatically inserted
• **Green** = key fields, put at end of $values array

JOD Reports - Java OpenDocument Reports

- http://jodreports.sourceforge.net/
- Open source, Java-based report template tool
- Create documents and reports in OpenDocument Text format from templates
- Templates can be visually composed using the OpenOffice.org Writer word processor
- These documents can then be converted to PDF, Word and RTF with JODConverter

<WO> <WO_NUM>65093</WO_NUM> <NEED_BY_DATE>Thu Oct 25, 2012</NEED_BY_DATE> <WO_DESCRIPTION>47 Barrett St</WO_DESCRIPTION> <WO_PREMISE_NUM>27590</WO_PREMISE_NUM> <OWNERS_NAME>Valance, John G</OWNERS_NAME> <OWNERS_PHONE>802-355-4024</OWNERS_PHONE> <METER_NO>25018</METER_NO> <WO_SPECIAL_INSTRUCTION /> <WO_TYPE_DESC>Service New Construction</WO_TYPE_DESC> <WO_GL_COST>VGSBS-1071-0-65093</WO_GL_COST> <PT_DESCRIPTION>Plastic Service 1"</PT_DESCRIPTION> <ESTLEN>.00</ESTLEN> <ESTHRS>.00</ESTHRS> <CURBSTOP>N</CURBSTOP> <FLWLIM>800</FLWLIM> <WO_TOWN_NAME>So.
Burlington</WO_TOWN_NAME> <MAINPIPE_TYPE /> </WO> **VERMONT GAS SYSTEMS, INC.** **Work Order for Input field** <table> <thead> <tr> <th>Date Issued</th> <th>Input field</th> </tr> </thead> <tbody> <tr> <td>Location</td> <td>Input field - Input field</td> </tr> <tr> <td>Needed By</td> <td>Input field</td> </tr> <tr> <td>Type of Work</td> <td>Input field</td> </tr> <tr> <td>Owner's Name</td> <td>Input field</td> </tr> <tr> <td>Owner's Phone#</td> <td>Input field</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Charge Acct#</th> <th>Input field</th> </tr> </thead> <tbody> <tr> <td>ROW#</td> <td>Input field</td> </tr> <tr> <td>Pipe Type</td> <td>Input field Input field</td> </tr> <tr> <td>Est. Length</td> <td>Input field</td> </tr> <tr> <td>Est. Crew Hrs</td> <td>Input field</td> </tr> <tr> <td>Curb Stop</td> <td>Input field</td> </tr> <tr> <td>Flow Limiter</td> <td>Input field</td> </tr> </tbody> </table> **Special Instructions:** Input field **Dig Safe Auth. No.:** By: Begin Dig Date: __________ **Soil:** Sand Clay Rocky Packing: Loose Hard **Soil Moisture:** Dry Moist Wet N/A **Exposed MAIN Info** Size: 3/4” 1” 1 1/2” 2” 4” 6” Pipe Type: DPE8000 DPE1000 PPE8300 EndoPE Coating Type: FlexClad Pritec Xtru Scotch Coating External Cond. Good Fair Poor Smooth Internal Cond. Good Fair Poor Smooth Pile C20 Reading: __________ Main Depth: _______ Clean Up Required: (Dimensions) - Topsoil - Blacktop - Concrete - Gravel/Stone Comments: Input field **Input Field**

```
$(WO.WO_ENTRY_DATE)
```

**Excavation Permit #** **Edit** - OK - Cancel - Help **SlsApp#** Input field **Premise#** Input field **Address** Input field **Apt#** Input field **Sls Pers.** Input field

public function retrievePDF($xml, $request)
{
    // urlencode and concatenate the POST arguments
    $postargs = 'outputFormat=pdf&model=' .
urlencode($xml);
    // $request is the JOD-compliant URL for the appropriate report template
    $session = curl_init($request);
    curl_setopt($session, CURLOPT_POSTFIELDS, $postargs);  // this is the body of the POST
    curl_setopt($session, CURLOPT_RETURNTRANSFER, true);   // return the response as a string
    $response = curl_exec($session);
    curl_close($session);
    return $response;
}

Contact Information: John Valance johnv@jvalance.com 802-355-4024
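For comparison, the same round trip, POSTing the model XML to a JODReports template URL and reading back the rendered PDF, can be sketched with Python's standard library only. `build_post_body` and `retrieve_pdf` are our illustrative names; the `outputFormat` and `model` parameters mirror the PHP above.

```python
# Sketch of the JODReports POST round trip using only the standard library.
from urllib.parse import urlencode
from urllib.request import urlopen

def build_post_body(xml):
    # Same POST body as the PHP version: outputFormat=pdf plus the
    # url-encoded model XML.
    return urlencode({'outputFormat': 'pdf', 'model': xml})

def retrieve_pdf(xml, template_url):
    # POST the body to the JOD template URL and return the raw PDF bytes.
    body = build_post_body(xml).encode('ascii')
    with urlopen(template_url, data=body) as resp:   # data= makes this a POST
        return resp.read()
```

Passing `data=` to `urlopen` switches the request to POST, playing the role of `CURLOPT_POSTFIELDS`; the response body is the generated PDF.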
Categorical Views on Computations on Trees (Extended Abstract) Ichiro Hasuo\textsuperscript{1}, Bart Jacobs\textsuperscript{1}, and Tarmo Uustalu\textsuperscript{2} \textsuperscript{1} Institute of Computing and Information Sciences, Radboud University Nijmegen, Postbus 9010, NL-6500 GL Nijmegen, The Netherlands http://www.cs.ru.nl/{ichiro, bart} \textsuperscript{2} Institute of Cybernetics at Tallinn University of Technology, Akadeemia tee 21, EE-12618 Tallinn, Estonia http://www.cs.ioc.ee/~tarmo Abstract. Computations on trees form a classical topic in computing. These computations can be described in terms of machines (typically called tree transducers), or in terms of functions. This paper focuses on three flavors of bottom-up computations, of increasing generality. It brings categorical clarity by identifying a category of tree transducers together with two different behavior functors. The first sends a tree transducer to a coKleisli or biKleisli map (describing the contribution of each local node in an input tree to the global transformation) and the second to a tree function (the global tree transformation). The first behavior functor has an adjoint realization functor, like in Goguen’s early work on automata. Further categorical structure, in the form of Hughes’s Arrows, appears in properly parameterized versions of these structures. 1 Introduction Tree transformations are functions sending trees to trees. Such transformations are of broad interest in computing, notably in language processing, and are often studied in relation to certain types of realizing machines. They form a classical topic. In this paper we aim at a systematic study of phenomena and constructions related to \textit{bottom-up} tree transformations. We first sketch two motivating observations: these will later be given detailed accounts. 
\textit{Behavior-realization adjunction} It is a fundamental idea in computer science that we associate with a “computable” function a “machine” which realizes it. Those machines which realize tree transformations are often called \textit{tree transducers} and have been extensively studied as a continuation of automata theory: see \cite{10, 11, 2} and also more recently \cite{1}. Here comes our first question. What do we mean by saying “a machine \(c\) realizes a transformation \(l\)”? Given a transformation \(l\), is there a machine which realizes it? Is there a canonical choice among such realizers? We shall answer these questions, following the idea of Goguen’s \textit{behavior-realization adjunction} \cite{3} for (a more elementary setting of) automata, see also \cite{9}. Tree functions from local behaviors We start with relabeling bottom-up tree transformations that only change labels on each node of an input tree, like $l$ on the left. (Figure: an input tree with node labels $a_0, \dots, a_4$ is sent by $l$ to an output tree of the same shape with labels $b_0, \dots, b_4$.) Now let us consider another function $k$ which operates on the same input trees as $l$ does but returns the root label of the output tree of $l$. That is, $k = \epsilon \circ l$ where $\epsilon$ extracts the root label. It may seem that $k$ (which shall be called a local behavior) carries less information than $l$ does—$\epsilon$ throws information away. But when $l$ is relabeling bottom-up we can recover $l$ from $k$.
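The recovery of $l$ from $k$ can be made concrete with a small sketch, ours rather than the paper's, in Python: encode a tree as a pair of a label and a list of subtrees, and let the coKleisli extension of the local behavior $k$ apply $k$ at the subtree rooted at every node.

```python
# Sketch (our own encoding): a tree is (label, [subtrees]).
def cokleisli_extension(k):
    """Lift a local behavior k : Tree[A] -> B to the relabeling bottom-up
    tree function Tree[A] -> Tree[B] that applies k at every node."""
    def l(t):
        _, children = t
        return (k(t), [l(c) for c in children])
    return l

def size(t):                       # number of nodes in t
    return 1 + sum(size(c) for c in t[1])

# Example local behavior: relabel each node by (old label, subtree size).
k = lambda t: (t[0], size(t))
l = cokleisli_extension(k)

t = ('a', [('b', []), ('c', [('d', [])])])
out = l(t)
assert out[0] == k(t)              # extracting the root of l(t) gives k(t) back
```

Here `cokleisli_extension` plays the role of the recovery $k \mapsto l$, and taking the first component of the result plays the role of $\epsilon$; the assertion checks $\epsilon \circ l = k$ on one input.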
Our main contribution is to give an account of some classes of tree transformations in terms of diagrams like this:

$$
\begin{array}{ccc}
\mathbf{TT} & \xrightarrow{\;TF\;} & \mathbf{TF}_{\uparrow} \hookrightarrow \mathbf{TF}\\
Real\,\dashv\,LBeh\;\Big\updownarrow & \nearrow_{\;W\;} & \\
\mathbf{LBeh} & &
\end{array}
\qquad(2)
$$

Here, $TF$ and $LBeh$ are two behavior functors from the category of tree transducers (“machines”) to tree functions and to local behaviors. For relabelings, the functor $W$ is an isomorphism: this embodies the equivalence of the two behaviors $TF$ and $LBeh$ as hinted at above; for more general types of tree transformations, it will be epi. The category $\mathbf{TF}_{\uparrow}$ of bottom-up tree functions is included in the category $\mathbf{TF}$ of tree functions in general: we shall give a categorical characterization of being “bottom-up”. The local behaviors are coKleisli maps of certain comonads, in one case biKleisli maps of a distributive law of a comonad over a monad, and agree with the idea of comonadic notions of computation as those that send “values-in-contexts” to “values” [13,12] (the latter reference deals with attribute grammars, another type of tree computations). The behavior-realization adjunction is presented as $Real \dashv LBeh$. In each of the Sects. 2–4, we shall develop a situation like (2) for a specific class of tree transformations—and hence a corresponding class of tree transducers. Namely, relabeling bottom-up tree transducers in Sect. 2; rebranching bottom-up tree transducers in Sect. 3, and bottom-up tree transducers in full generality in Sect. 4. In Sect. 5 we generalize our categorical formulation in an orthogonal direction: we uncover further compositional structures using Hughes’s Arrows [5], and thus a way to view tree transformations as “structured computations” in programming semantics.
## 2 Relabeling Bottom-Up Tree Transducers

In this section we will consider a class of tree transducers (TTs) that operate on well-founded trees of a fixed branching type $F$ (a set functor), with labels from given sets. We shall establish the following instance of (2):

$$
\begin{array}{ccc}
\mathbf{TT}(A, B) & \xrightarrow{\;TF\;} & \mathbf{TF}_{\uparrow}(A, B) \hookrightarrow \mathbf{TF}(A, B)\\
Real\,\dashv\,LBeh\;\Big\updownarrow & \nearrow_{\;W\;\cong} & \\
\mathbf{LBeh}(A, B) & &
\end{array}
\qquad(3)
$$

The $A$-labeled trees of the branching type $F$ (for brevity, we also say $A$-trees) live in the initial algebra of the functor $A \times F(-)$, whose carrier we denote by $DA$. The algebra structure $A \times F(DA) \rightarrow DA$ will be denoted by $\sigma_A$. Obviously $D1$ is the set of unlabelled trees, or tree shapes.

**Definition 2.1** A (relabeling bottom-up) tree transducer (TT) is a function $c : A \times FX \rightarrow B \times X$ in $\mathbf{Sets}$, where the set $X$ is called the state space. A morphism of such TTs from $c : A \times FX \rightarrow B \times X$ to $d : A \times FY \rightarrow B \times Y$ is a function $f : X \rightarrow Y$ satisfying $(B \times f) \circ c = d \circ (A \times Ff)$. TTs and morphisms between them form a category which we denote by $\mathbf{TT}(A, B)$, leaving again the dependence on $F$ implicit. Obviously, $\mathbf{TT}(A, B)$ is nothing but the comma category $(A \times F(-), B \times (-))$.

**Example 2.2** The operation of a TT is best described on an example. As the branching type $F$ we take $1 + (-)^2$, describing well-founded binary trees. Consider a TT $c : A \times (1 + X^2) \rightarrow B \times X$ and the leftmost tree below as an input. The bottom-up computation starts at the leaves: let $c(a_0, \kappa_1(\ast)) = (b_0, x_0)$, where $\kappa_1, \kappa_2$ are coproduct injections. This assigns a label $b_0$ and a state $x_0$ to the corresponding leaf of the output tree. Similar mappings at the other leaves lead to the middle tree. At the inner position of $a_2$, the label on the output tree is determined by the input label $a_2$ as well as by the states $x_0, x_1$ of the successor nodes. They are already available precisely because we proceed in a bottom-up manner.
Now we get $(b_2, x_2)$ from the outcome $c(a_2, \kappa_2(x_0, x_1)) = (b_2, x_2)$. We continue this way and get the tree on the right from $c(a_4, \kappa_2(x_2, x_3)) = (b_4, x_4)$. By forgetting about the states $x_i$, we finally obtain the output tree of the computation. It is obvious that the shape of the input tree is preserved. This will change in the next section.

For a TT $c$, we shall now define two behaviors $TF(c)$ and $LBeh(c)$. The former is a function that carries an $A$-tree to a $B$-tree; the latter carries an $A$-tree to an element in $B$, as hinted at in the introduction.

**Definition 2.3** A TT $c : A \times FX \rightarrow B \times X$ induces its tree function behavior $TF(c) : DA \rightarrow DB$ and its local behavior $LBeh(c) : DA \rightarrow B$ via the following two diagrams, both using the initiality of $\sigma_A$:

$$
\begin{array}{ccc}
A \times F(DA) & \xrightarrow{\;A \times Fh\;} & A \times F(DB \times X)\\
\sigma_A \downarrow \cong & & \downarrow \tilde{c}\\
DA & \xrightarrow{\;h\;} & DB \times X
\end{array}
\qquad
\begin{array}{ccc}
A \times F(DA) & \xrightarrow{\;A \times Fh'\;} & A \times F(B \times X)\\
\sigma_A \downarrow \cong & & \downarrow c \,\circ\, (A \times F\pi_2)\\
DA & \xrightarrow{\;h'\;} & B \times X
\end{array}
$$

Here $h$ and $h'$ are the unique algebra homomorphisms, and we set $TF(c) = \pi_1 \circ h$ and $LBeh(c) = \pi_1 \circ h'$. The algebra structure $\tilde{c}$ on the left is given by the composite

$$A \times F(DB \times X) \xrightarrow{\;A \times \langle F\pi_1, F\pi_2 \rangle\;} \underline{A} \times FDB \times \underline{FX} \xrightarrow{\;c\;} \underline{B} \times \underline{FDB} \times X \xrightarrow{\;\sigma_B\;} DB \times X$$

the underlining indicating what the maps act on.

The mapping $A \mapsto DA$ carries a comonad structure. It is the cofree recursive comonad on $F$ [14]. A local behavior $LBeh(c) : DA \rightarrow B$ is a morphism $A \rightarrow B$ in the coKleisli category of the comonad $D$. This is a general phenomenon. By a simple diagram chase it can be seen that a morphism of TTs is indeed a “behavior-preserving map” wrt. the above two behaviors.

**Lemma 2.4** Assume we have a morphism $f$ from one TT $c$ to another $d$. Then $LBeh(c) = LBeh(d)$ and $TF(c) = TF(d)$.
□ In Example 2.2 we have illustrated how a TT acts in a bottom-up fashion on trees. Before we can show that the $TF$ behavior from Def. 2.3 is indeed “bottom-up” we need a characterization of bottom-up tree functions. Intuitively, these are the functions $l : DA \rightarrow DB$ such that a tree whose immediate subtrees are $t_1, t_2$ is sent to a tree whose immediate subtrees are $l(t_1), l(t_2)$: the branching structure is preserved and the subtrees are transformed independently. The following definition captures this intuition in categorical terms.

**Definition 2.5** A tree function $l : DA \rightarrow DB$ is said to be (relabeling) **bottom-up** if it is a morphism of coalgebras, as in:

$$
\begin{array}{ccc}
DA & \xrightarrow{\;l\;} & DB\\
\pi_2 \,\circ\, \sigma_A^{-1} \downarrow & & \downarrow \pi_2 \,\circ\, \sigma_B^{-1}\\
FDA & \xrightarrow{\;Fl\;} & FDB
\end{array}
\qquad(4)
$$

**Lemma 2.6** For a TT $c : A \times FX \rightarrow B \times X$, the induced tree function $TF(c) : DA \rightarrow DB$ is bottom-up. □

Now we can define the three semantic domains appearing in (3). We write:

- $\mathbf{LBeh}(A, B)$ for the set of maps $DA \rightarrow B$, i.e., $\mathbf{LBeh}(A, B) = \mathrm{Hom}_{\mathbf{Sets}}(DA, B)$;
- $\mathbf{TF}(A, B)$ for the set of maps $DA \rightarrow DB$, i.e., $\mathbf{TF}(A, B) = \mathrm{Hom}_{\mathbf{Sets}}(DA, DB)$;
- $\mathbf{TF}_{\uparrow}(A, B)$ for the subset of bottom-up maps $DA \rightarrow DB$.

These three sets are considered as discrete categories. This enables us to consider behavior mappings as functors from $\mathbf{TT}(A, B)$, in a degenerate fashion.

**Lemma 2.7** The mappings $LBeh$ and $TF$ in Def. 2.3 extend to functors

$$LBeh : \mathbf{TT}(A, B) \rightarrow \mathbf{LBeh}(A, B) \quad\text{and}\quad TF : \mathbf{TT}(A, B) \rightarrow \mathbf{TF}(A, B).$$

The functor $TF$ factors through the embedding $\mathbf{TF}_{\uparrow}(A, B) \hookrightarrow \mathbf{TF}(A, B)$.
□ A realization functor $Real : \mathbf{LBeh}(A, B) \rightarrow \mathbf{TT}(A, B)$ is defined to send a local behavior $k : DA \rightarrow B$ to the TT $\langle k, \mathrm{id}_{DA} \rangle \circ \sigma_A : A \times F(DA) \rightarrow B \times DA$. This TT has a canonical state space, namely the set $DA$ of all $A$-trees; in all but degenerate cases, this state space is infinite. In fact $Real$ yields the initial realization and we get a behavior-realization adjunction in the spirit of [3].

**Theorem 2.8** We have $Real \dashv LBeh$, and since the category $\mathbf{LBeh}(A, B)$ is discrete, this adjunction is actually a coreflection.

**Proof.** The statement is equivalent to the following. For a given local behavior $k : DA \rightarrow B$, the realization $Real(k)$ is the initial one among those which yield $k$ as their $LBeh$ behavior. Let $c : A \times FX \rightarrow B \times X$ be one such TT. The following fact is shown by diagram chasing: a function $f : DA \rightarrow X$ is a morphism of TTs from $Real(k)$ to $c$ if and only if $\langle k, f \rangle : DA \rightarrow B \times X$ is an algebra homomorphism from the initial algebra $\sigma_A$ to the algebra $c \circ (A \times F\pi_2) : A \times F(B \times X) \rightarrow B \times X$. Initiality of $\sigma_A$ yields existence and uniqueness of such $f$, hence the initiality of $Real(k)$. □

Next we shall establish an isomorphism between the two (local and tree function) behaviors, which we already discussed in the introduction. By Lemma 2.7, Theorems 2.8 and 2.9 we have established the situation (3).

**Theorem 2.9** The following composite $W$ of functors is an isomorphism.

$$W = \left( \mathbf{LBeh}(A, B) \xrightarrow{\;Real\;} \mathbf{TT}(A, B) \xrightarrow{\;TF\;} \mathbf{TF}_{\uparrow}(A, B) \right)$$

**Proof.** The functor $W$ sends a map $k : DA \rightarrow B$ to its coKleisli extension $Dk \circ \delta_A : DA \rightarrow DDA \rightarrow DB$.
Let $E : \mathbf{TF}_{\uparrow}(A, B) \rightarrow \mathbf{LBeh}(A, B)$ be the functor carrying a bottom-up tree function $l : DA \rightarrow DB$ to $\epsilon_B \circ l : DA \rightarrow DB \rightarrow B$. Thus $E$ post-composes the tree function with extraction of the root label. Then $E \circ W = \mathrm{Id}$ because $D$ is a comonad. For the opposite direction $W \circ E = \mathrm{Id}$, bottom-upness is crucial. □

## 3 Rebranching Bottom-Up Tree Transducers

In this section we pursue the same idea as in the previous section, but for a more general class of bottom-up TTs, namely rebranching TTs. They no longer preserve tree shapes; in fact they take trees of one branching type $F$ to trees of a possibly different branching type $G$, by reorganizing the branching of any node of the input tree from type $F$ to type $G$. We shall establish the following situation, which is almost the same as (3). The main differences are: 1) the fixed parameters are now functors $F, G$ for branching types (instead of sets $A, B$ of labels), meaning that we consider transformations of $F$-branching trees ($F$-trees for short) into $G$-trees; 2) the isomorphism between $\mathbf{LBeh}$ and $\mathbf{TF}_{\uparrow}$ is not present.

$$
\begin{array}{ccc}
\mathbf{TT}(F, G) & \xrightarrow{\;TF\;} & \mathbf{TF}_{\uparrow}(F, G) \hookrightarrow \mathbf{TF}(F, G)\\
Real\,\dashv\,LBeh\;\Big\updownarrow & \nearrow_{\;W\;} & \\
\mathbf{LBeh}(F, G) & &
\end{array}
\qquad(5)
$$

A novelty in this section is what we call “placeholders-via-naturality”. TTs are conventionally systems of transition rules in which placeholders appear explicitly. In our categorical approach, they have quite a different presentation, as natural transformations (Def. 3.1). The correspondence between these seemingly different notions will be described via the Yoneda lemma. Let us first present the conventional notion of rebranching TTs.
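Before the formal development, the rebranching idea can be sketched concretely in Python (our own encoding of trees and rules, not the paper's notation): a transition rule consumes an input symbol together with the states of the successor nodes, and produces an output symbol, a list of placeholder indices saying how the transformed subtrees are reordered or duplicated, and a new state.

```python
# Sketch of a bottom-up rebranching run. A tree is (symbol, [subtrees]);
# a rule maps (symbol, successor states) to (output symbol, placeholder
# indices into the transformed subtrees, new state).
def run(rules, t):
    """Bottom-up run over t; returns (output tree, state at the root)."""
    sym, subs = t
    results = [run(rules, s) for s in subs]           # transform subtrees first
    states = tuple(r[1] for r in results)
    out_sym, placeholders, new_state = rules[(sym, states)]
    children = [results[i][0] for i in placeholders]  # may reorder or duplicate
    return ((out_sym, children), new_state)

# Rule analogous to f(x1, x2) -> (g(y2, y1, y1), x): binary f becomes
# ternary g, second subtree moved first, first subtree duplicated.
rules = {('a', ()): ('a', [], 'x1'),
         ('b', ()): ('b', [], 'x2'),
         ('f', ('x1', 'x2')): ('g', [1, 0, 0], 'x')}

out, state = run(rules, ('f', [('a', []), ('b', [])]))
assert out == ('g', [('b', []), ('a', []), ('a', [])]) and state == 'x'
```

The placeholder indices play the role of the $y_i$: they pick out already-transformed subtrees, which is exactly why the computation must proceed bottom-up.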
Let \( \Sigma \) and \( \Delta \) be universal-algebraic signatures: we consider transformations of \( \Sigma \)-trees into \( \Delta \)-trees. Conventionally, a rebranching TT with a state space \( X \) is presented as an element of the set
\[ \prod_{f \in \Sigma} \Big( X^{|f|} \longrightarrow \big( \textstyle\coprod_{g \in \Delta} |f|^{|g|} \big) \times X \Big). \tag{6} \]
It is illustrative to think of the cardinality \( |f| \) as a set \( \{y_1, \ldots, y_{|f|}\} \) of placeholders, of the set \( X^{|f|} \) on the left as the set of graphs of functions from \( |f| \) to \( X \), and of the set \( |f|^{|g|} \) on the right as the set of length-\(|g|\) lists over \(|f|\). For example, assume that some \(f\) is binary and a TT (6) carries \((f, ((y_1, x_1), (y_2, x_2)))\) to \(((g, (y_2, y_1, y_1)), x)\) with a ternary \(g\). This is understood graphically as follows.

(Figure (7): the rule drawn as trees — an \(f\)-node whose two successors carry the states \(x_1, x_2\) becomes a \(g\)-node with state \(x\), whose three successors are the subtrees found at the placeholders \(y_2, y_1, y_1\).) (7)

This is “bottom-up” because the state \(x\) is determined by the states \(x_1, x_2\) assigned to its successor nodes. The placeholders \(y_1, y_2\) designate how the subtrees are reorganized in the bottom-up construction of a tree function behavior \(l\). The name rebranching comes from the fact that, on the right-hand side of (7), exactly one function symbol occurs, so that a layer in an input tree is sent to exactly one layer of the output tree, and only the branching within the layer changes. In Sect. 4 we will abandon this requirement as well. We now present our categorical definition of TTs.

**Definition 3.1** A (rebranching bottom-up) TT is a natural transformation \(\gamma : F(\_ \times X) \Rightarrow G\_ \times X\) between set functors. The set \(X\) is called its state space. A morphism of TTs from \(\gamma : F(\_ \times X) \Rightarrow G\_ \times X\) to \(\delta : F(\_ \times Y) \Rightarrow G\_ \times Y\) is a function \(f : X \rightarrow Y\) satisfying \((G\_ \times f) \circ \gamma = \delta \circ F(\_ \times f)\).
We denote by \(\mathbf{TT}(F, G)\) the category of TTs and morphisms. This categorical formulation may seem very different from the conventional one (6). But somewhat remarkably the two agree for functors arising from traditional signatures. Let \(F, G\) be induced by universal-algebraic signatures \(\Sigma, \Delta\): namely, \(F = \coprod_{f \in \Sigma} (\_)^{|f|}\) and \(G = \coprod_{g \in \Delta} (\_)^{|g|}\). The following calculation shows the equivalence between (6) and Def. 3.1 via the Yoneda lemma.
\[
\begin{array}{rcll}
\prod_{f \in \Sigma} \Big( X^{|f|} \rightarrow \big( \coprod_{g \in \Delta} |f|^{|g|} \big) \times X \Big)
& \cong & \prod_{f \in \Sigma} \big( X^{|f|} \rightarrow G|f| \times X \big) & \text{by definition of } G \\
& \cong & \prod_{f \in \Sigma} \Big( X^{|f|} \rightarrow \big( (\_)^{|f|} \Rightarrow G\_ \times X \big) \Big) & \text{by Yoneda} \\
& \cong & \prod_{f \in \Sigma} \Big( (\_ \times X)^{|f|} \Rightarrow G\_ \times X \Big) & \text{by } (\_ \times X)^{|f|} \cong (\_)^{|f|} \times X^{|f|} \\
& \cong & \Big( \coprod_{f \in \Sigma} (\_ \times X)^{|f|} \Big) \Rightarrow G\_ \times X & \text{by definition of } F.
\end{array}
\]
On the second line the set of placeholders (the argument \(|f|\) in \(G|f|\)) is absorbed into naturality, hence “placeholders-via-naturality”. We now proceed to the tree function behavior of our TTs. The tree functions here take $F$-trees to $G$-trees. Going slightly more general than necessary for this section (but preparing for the next), we write $F^*Z$ for the carrier of the initial $(Z + F\_)$-algebra, i.e., the set of unlabelled $F$-trees with variables (graft-points) from a set $Z$. For the $F$-part of the algebra structure we write $\alpha^F_Z : F(F^*Z) \to F^*Z$. $F$-trees simpliciter (i.e., those without variables) arise as the special case $F^*0$. The set (or discrete category) of tree functions $F^*0 \to G^*0$ will be denoted by $\mathrm{TF}(F, G)$.

**Definition 3.2** A TT $\gamma : F(\_ \times X) \Rightarrow G\_ \times X$ induces its tree-function behavior $\mathrm{TF}(\gamma) \in \mathrm{TF}(F, G)$ by the following algebra initiality diagram.
\[
\begin{array}{ccc}
FF^*0 & \xrightarrow{F\,\mathrm{beh}} & F(G^*0 \times X) \\
\alpha^F_0 \big\downarrow \cong & & \big\downarrow \gamma_{G^*0} \\
F^*0 & & GG^*0 \times X \\
 & \searrow\!{\scriptstyle\mathrm{beh}} & \big\downarrow \alpha^G_0 \times X \\
 & & G^*0 \times X
\end{array}
\qquad \mathrm{TF}(\gamma) = \pi_1 \circ \mathrm{beh} \tag{8}
\]
Here again, similarly to the situation for relabelings, not all the tree functions $F^*0 \to G^*0$ are induced by a TT but only “bottom-up” ones are.

**Definition 3.3** A tree function $l : F^*0 \to G^*0$ is said to be (rebranching) bottom-up if there exists a natural transformation $\omega : F(\_ \times F^*0) \Rightarrow G\_$, called a witness, which makes the following diagram commute, i.e., which satisfies
\[
l \circ \alpha^F_0 \;=\; \alpha^G_0 \circ Gl \circ \omega_{F^*0} \circ F\langle \mathrm{id}, \mathrm{id} \rangle \;:\; FF^*0 \to G^*0.
\]
By $\mathrm{TF}_{\uparrow}(F, G)$ we denote the set (discrete category) of tree functions $F^*0 \to G^*0$ which are rebranching bottom-up. We have $\mathrm{TF}_{\uparrow}(F, G) \subset \mathrm{TF}(F, G)$. Witnesses are not necessarily unique. A simple example is the tree function that sends an unlabelled binary tree to the unlabelled unary tree of its height.

**Lemma 3.4** For a TT $\gamma : F(\_ \times X) \Rightarrow G\_ \times X$, the induced tree function $\mathrm{TF}(\gamma) : F^*0 \to G^*0$ is (rebranching) bottom-up.

**Proof.** Take $\omega = \pi_1 \circ \gamma \circ F(\_ \times (\pi_2 \circ \mathrm{beh}))$, where $\mathrm{beh}$ is from (8). \qed

**Definition 3.5** Given a TT $\gamma : F(\_ \times X) \Rightarrow G\_ \times X$, we define its local behavior $\mathrm{LBeh}(\gamma)$ to be the witness $F(\_ \times F^*0) \Rightarrow G\_$ from the proof of Lemma 3.4.

In Sect. 2 we observed that a local behavior $DA \to B$ is a coKleisli map. This is also the case in this section. In fact, the mapping $F \mapsto F(F^*0 \times \_)$ extends to a comonad on the functor category $[\mathbf{Sets}, \mathbf{Sets}]$, so that any natural transformation $F(F^*0 \times \_) \Rightarrow G\_$ is a coKleisli map from $F$ to $G$. We denote their set (discrete category) by $\text{LBeh}(F,G)$.
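To complement the categorical development, the conventional presentation (6) can be run directly on concrete trees. The following Python sketch is purely illustrative (the function and rule names are ours, not from any library, and trees are encoded naively as nested tuples); it implements the example rule from (7), where a binary \(f\) is sent to a ternary \(g\) with placeholder list \((y_2, y_1, y_1)\):

```python
# A toy, finite-state implementation of a rebranching bottom-up tree
# transducer in the conventional style of (6).  Trees are nested tuples
# ("symbol", child, ...).  All names here are illustrative; this is not
# library code.

def run_bottom_up(rules, tree):
    """Transform `tree` bottom-up; return (output_tree, final_state).

    `rules` maps (input_symbol, states_of_successors) to
    (output_symbol, placeholders, new_state), where the placeholders are
    1-based indices into the successor subtrees, as in the rule
    (f, ((y1, x1), (y2, x2))) |-> ((g, (y2, y1, y1)), x).
    """
    sym, *children = tree
    results = [run_bottom_up(rules, c) for c in children]   # recurse on successors
    out_subtrees = [t for (t, _) in results]
    states = tuple(s for (_, s) in results)
    out_sym, placeholders, new_state = rules[(sym, states)]
    # Rebranching: exactly one output symbol per input symbol; only the
    # branching is reorganized according to the placeholder list.
    out_tree = (out_sym,) + tuple(out_subtrees[i - 1] for i in placeholders)
    return out_tree, new_state

# The example rule from (7) with a binary f and a ternary g; a nullary
# symbol e is added so that finite input trees exist.
rules = {
    ("e", ()): ("e", (), "x0"),
    ("f", ("x0", "x0")): ("g", (2, 1, 1), "x"),
}
out, state = run_bottom_up(rules, ("f", ("e",), ("e",)))
# out == ("g", ("e",), ("e",), ("e",)) and state == "x"
```

As promised, tree shapes are not preserved: a node with two successors becomes a node with three successors, while each input layer still yields exactly one output layer.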
**Lemma 3.6** The operations $\text{LBeh}$ and $\text{TF}$ in Definitions 3.5 and 3.2 extend to functors $\text{LBeh} : \mathbf{TT}(F,G) \to \text{LBeh}(F,G)$ and $\text{TF} : \mathbf{TT}(F,G) \to \text{TF}_{\uparrow}(F,G)$.

**Theorem 3.7** We have an adjunction (actually a coreflection) $\text{Real} \dashv \text{LBeh}$, where the realization functor for local behaviors $\text{Real} : \text{LBeh}(F,G) \to \mathbf{TT}(F,G)$ sends a local behavior $\omega : F(\_ \times F^*0) \Rightarrow G\_$ to the TT with $Z$-components
\[
F(Z \times F^*0) \xrightarrow{\langle \omega_Z,\, F\pi_2 \rangle} GZ \times FF^*0 \xrightarrow{GZ \times \alpha^F_0} GZ \times F^*0 .
\]

**Proposition 3.8** The functor $W = (\text{LBeh}(F,G) \xrightarrow{\text{Real}} \mathbf{TT}(F,G) \xrightarrow{\text{TF}} \text{TF}_{\uparrow}(F,G))$ is an epimorphism.

### 4 Relayering Bottom-Up Tree Transducers

In this section we will consider our most general class of bottom-up tree transformations, which can send a layer in an input tree to a truncated subtree in the output tree. For reasons of space, we must be fairly brief. We establish the same situation as in the previous section, except that we do not have to single out any condition of bottom-upness of tree functions. As we do not restrict state spaces to be finite, any tree function can arise as the behavior of a relayering bottom-up TT. A categorical presentation of relayering TTs is obtained much like that of rebranching TTs in Sect. 3, using “placeholders-via-naturality”. We recall the notation $F^*Z$ for the carrier of the initial $(Z + F\_)$-algebra. It is now important for us that the functor $F^*$ carries a monad structure, in particular a multiplication $\mu^F : F^*F^* \Rightarrow F^*$ that can be defined via initiality.

**Definition 4.1** A (relayering bottom-up) TT is a natural transformation of the form $F(\_ \times X) \Rightarrow G^*\_ \times X$. Such TTs form a category $\mathbf{TT}(F,G)$ together with an obvious notion of morphism. The difference from Def.
3.1 is that we have $G^*$ instead of $G$ in the codomain. This corresponds to allowing terms over placeholders, rather than applications of single function symbols, in the right-hand sides of transition rules (7): a single layer of the input tree may now be sent to several layers of the output tree.

**Definition 4.2** A TT \( \gamma : F(\_ \times X) \Rightarrow G^*\_ \times X \) induces its tree-function behavior \( \mathrm{TF}(\gamma) : F^*0 \to G^*0 \) by the following algebra initiality diagram.
\[
\begin{array}{ccc}
FF^*0 & \xrightarrow{F\,\mathrm{beh}} & F(G^*0 \times X) \\
\alpha^F_0 \big\downarrow \cong & & \big\downarrow \gamma_{G^*0} \\
F^*0 & & G^*G^*0 \times X \\
 & \searrow\!{\scriptstyle\mathrm{beh}} & \big\downarrow \mu^G_0 \times X \\
 & & G^*0 \times X
\end{array}
\qquad \mathrm{TF}(\gamma) = \pi_1 \circ \mathrm{beh} \tag{11}
\]
For relayering TTs any tree function is bottom-up: a tree function \( l : F^*0 \to G^*0 \) is realized by the TT whose \( Z \)-component is
\[
F(Z \times F^*0) \xrightarrow{F\pi_2} FF^*0 \xrightarrow{\alpha^F_0} F^*0 \xrightarrow{\langle l,\, \mathrm{id} \rangle} G^*0 \times F^*0 \xrightarrow{G^*! \times F^*0} G^*Z \times F^*0 ,
\]
where \( ! \) denotes the empty map \( 0 \to Z \). This realization however does not give an adjunction. The local behavior induced by a TT \( \gamma \) is a natural transformation \( \text{LBeh}(\gamma) : F(\_ \times F^*0) \Rightarrow G^*\_ \). Such natural transformations are biKleisli maps of a distributive law of the comonad \( F \mapsto F(\_ \times F^*0) \) of the previous section over the monad \( F \mapsto F^* \) delivering free monads. We denote their set (discrete category) by \( \text{LBeh}(F, G) \). For a realization functor for local behaviors \( \text{Real} : \text{LBeh}(F, G) \to \mathbf{TT}(F, G) \) we obtain an adjunction (actually a coreflection) \( \text{Real} \dashv \text{LBeh} \), similarly to the rebranching case.

## 5 Allowing Parameters to Vary

In Sect. 2 we saw the fundamental diagram (3) relating tree transducers, local behaviors and tree functions. In that diagram we kept the alphabets \( A, B \) fixed. In this section we shall identify additional mathematical structure that emerges by allowing the alphabets to vary.
For this purpose we utilize the notion of Arrows (introduced by Hughes [5], and described more abstractly as monoids in categories of bifunctors in [4]), and also Freyd categories and fibered spans. Arrows were devised for the purpose of reconciling impure “structured computations” with purely functional computation. Commonly an Arrow \( A(-, +) \) is a bifunctor \( \mathbb{C}^{\text{op}} \times \mathbb{C} \to \mathbf{Sets} \): in this case \( A(A, B) \) is the set of structured computations (of the kind designated by \( A \)) from the type \( A \) to \( B \). Since we want to consider \( \text{TT}(A, B) \) of relabeling transducers as a category of structured computations, we shall use \(\mathbf{Cat}\)-valued Arrows instead: these are bifunctors \( \mathbb{C}^{\text{op}} \times \mathbb{C} \to \mathbf{Cat} \) with additional structure \( \mathrm{arr} \) and \( \ggg \).³ The notion of \(\mathbf{Cat}\)-valued Arrow is in fact the same thing as that of a Freyd category [8] (enriched over \(\mathbf{Cat}\) in a suitable way); this was shown in [7]. Moreover, a \(\mathbf{Cat}\)-valued Arrow, as a bifunctor \( \mathbb{C}^{\text{op}} \times \mathbb{C} \to \mathbf{Cat} \), induces a fibered span via the generalized Grothendieck construction (see, e.g., [6, Ch. 9]).

³ For the sake of brevity, we ignore here the compatibility with products which is usually given by an operation first.

In the remainder of the section we shall parameterize the diagram (3) and obtain the corresponding situation for Arrows. In this case we have \( \mathbb{C} = \mathbf{Sets} \) as the base category. We do this only for relabelings due to limited space. The bifunctor \( \text{TT}(-, +) \) is such that \( \text{TT}(A, B) \) is the category of relabelings from \( A \)-trees to \( B \)-trees. It sends a morphism \( (\alpha, \beta) : (A, B) \to (C, D) \) in \( \mathbb{C}^{\text{op}} \times \mathbb{C} \) (hence \( \alpha : C \to A \) and \( \beta : B \to D \)) to the functor \( \text{TT}(A, B) \to \text{TT}(C, D) \) given as follows.
On objects:
$$ (A \times FX \xrightarrow{c} B \times X) \;\longmapsto\; \big( C \times FX \xrightarrow{\alpha \times FX} A \times FX \xrightarrow{c} B \times X \xrightarrow{\beta \times X} D \times X \big) $$
and on morphisms it is the identity. Interestingly, there is also a monoid structure \( \text{TT} \otimes \text{TT} \xrightarrow{\ggg} \text{TT} \) on the bifunctor \( \text{TT} \); this makes \( \text{TT} \) an Arrow (see [4]). We shall describe it a bit more concretely. For TTs \( A \times FX \xrightarrow{c} C \times X \) and \( C \times FY \xrightarrow{d} B \times Y \) with matching output/input, their composition \( c \ggg d \) has \( X \times Y \) as its state space:
$$ A \times F(X \times Y) \xrightarrow{A \times \langle F\pi_1,\, F\pi_2 \rangle} A \times FX \times FY \xrightarrow{c \times FY} C \times X \times FY \xrightarrow{\cong} C \times FY \times X \xrightarrow{d \times X} B \times Y \times X \xrightarrow{\cong} B \times X \times Y . $$
The operation \( \mathrm{arr} \) for \( \text{TT} \) carries a morphism \( A \xrightarrow{f} B \) in \( \mathbb{C} \) to a TT with a trivial state space \( 1 \): namely \( A \times F1 \xrightarrow{\pi_1} A \xrightarrow{f} B \xrightarrow{\cong} B \times 1 \). It is easy to check that \( \mathrm{arr} \) and \( \ggg \) satisfy the appropriate naturality and monoid equations. Just like \( \text{TT}(-, +) \) carries the structure of an Arrow, we can identify similar structure on \( \text{LBeh}(-, +) \), \( \text{TF}(-, +) \) and \( \text{TF}_{\uparrow}(-, +) \). It then turns out that the diagram (3), now without the fixed alphabets, also exists in parameterized form, even with preservation of this Arrow structure. For example, the behavior-realization adjunction is now described as an adjunction between Arrows.

**Theorem 5.1** We have the following situation in the 2-category \( \mathbf{Arrow} \).
$$
\begin{array}{ccc}
\text{TT}(-, +) & \xrightarrow{\ \text{TF}\ } & \text{TF}(-, +) \\
{\scriptstyle \text{Real}}\,\big\uparrow\!\dashv\!\big\downarrow\,{\scriptstyle \text{LBeh}} & & \cup \\
\text{LBeh}(-, +) & \xrightarrow[\cong]{\ W\ } & \text{TF}_{\uparrow}(-, +)
\end{array}
$$

6 Conclusions and Future Work

We have given a categorical account of three classes of bottom-up tree transformations. Notably, we have generalized traditional signatures to functors and replaced traditional descriptions of TTs based on placeholder notation with natural transformations, gaining simplicity and clarity. In future work, we will elaborate on our basic picture in a form where, in addition to “extensional” tree functions, we also have “intensional” tree functions, capable of tracking which node in an input tree goes where in the output tree. We will also include top-down computations, using the theory of containers, as well as bottom-up and top-down computations with look-ahead.

Acknowledgement T. Uustalu was partially supported by the Estonian Science Foundation grants No. 5567 and 6940.

References
pygrametl: A Powerful Programming Framework for Extract–Transform–Load Programmers

Thomsen, Christian; Pedersen, Torben Bach

Publication date: 2009

Document Version: Publisher's PDF, also known as Version of record

Link to publication from Aalborg University

Citation for published version (APA):

General rights

Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

- Users may download and print one copy of any publication from the public portal for the purpose of private study or research.
- You may not further distribute the material or use it for any profit-making activity or commercial gain.
- You may freely distribute the URL identifying the publication in the public portal.

Take down policy

If you believe that this document breaches copyright please contact us at vbn@aub.aau.dk providing details, and we will remove access to the work immediately and investigate your claim.

pygrametl: A Powerful Programming Framework for Extract–Transform–Load Programmers

Christian Thomsen and Torben Bach Pedersen

November, 2009

TR-25, A DB Technical Report

© ACM, (2009). This is the authors’ version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the Twelfth International Workshop on Data Warehousing and OLAP (2009) http://doi.acm.org/10.1145/1651291.1651301

Author(s): Christian Thomsen and Torben Bach Pedersen

Publication History

For additional information, see the DB TECH REPORTS homepage: ⟨www.cs.aau.dk/DBTR⟩.
Any software made available via DB TECH REPORTS is provided “as is” and without any express or implied warranties, including, without limitation, the implied warranty of merchantability and fitness for a particular purpose.

The DB TECH REPORTS icon is made from two letters in an early version of the Rune alphabet, which was used by the Vikings, among others. Runes have angular shapes and lack horizontal lines because the primary storage medium was wood, although they may also be found on jewelry, tools, and weapons. Runes were perceived as having magic, hidden powers. The first letter in the logo is “Dagaz,” the rune for day or daylight and the phonetic equivalent of “d.” Its meanings include happiness, activity, and satisfaction. The second letter is “Berkano,” which is associated with the birch tree. Its divinatory meanings include health, new beginnings, growth, plenty, and clearance. It is associated with Idun, goddess of Spring, and with fertility. It is the phonetic equivalent of “b.”

Abstract

Extract–Transform–Load (ETL) processes are used for extracting data, transforming it and loading it into data warehouses (DWs). Many tools for creating ETL processes exist. The dominating tools all use graphical user interfaces (GUIs) where the developer visually defines the data flow and operations. In this paper, we challenge this approach and propose to do ETL programming by writing code. To make the programming easy, we present the (Python-based) framework pygrametl which offers commonly used functionality for ETL development. By using the framework, the developer can efficiently create effective ETL solutions while exploiting the full power of programming. Our experiments show that when pygrametl is used, both the development time and running time are short when compared to an existing GUI-based tool.

1 Introduction

The Extract–Transform–Load (ETL) process is a crucial part of a data warehouse (DW) project.
The task of the ETL process is to extract data from possibly heterogeneous source systems, do transformations (e.g., conversions and cleansing of data) and finally load the transformed data into the target DW. It is well-known in the DW community that it is both time-consuming and difficult to get the ETL right due to its high complexity. It is often estimated that up to 80% of the time in a DW project is spent on the ETL. Many commercial and open source tools supporting ETL developers exist [2, 19]. The leading ETL tools provide graphical user interfaces (GUIs) in which the developers define the flow of data visually. While this is easy to use and gives a quick overview of the ETL process, there are also disadvantages connected with this sort of graphical programming of ETL programs. For some problems, it is difficult to express a solution with the standard components available in the graphical editor. It is then time-consuming to construct a solution based on (complicated) combinations of the provided components or on integrating custom-coded components into the ETL program. For other problems, it can also be much faster to express the desired operations in a few lines of code instead of drawing flows and setting properties in dialog boxes. In a recent article, Stodder names the impact of open source as a “BI megatrend”. It is argued that “[...] to customize data integration middleware through access to the source code is attractive because like it or not, many organizations need to tailor such routines to their requirements. Out-of-the-box routines only go so far.” [18]. We agree with this but, in our opinion, this is also an argument that supports programming of ETL operations. It is unattractive to integrate such a specialized routine into a GUI and provide eye-candy such as icons, configuration windows, etc. It is more productive only to program the core routine that does the data manipulation.
The productivity does not become high just by using a graphical tool. In fact, in personal communication with employees from a Danish company with a revenue larger than one billion US Dollars and hundreds of thousands of customers, we have learned that they saw no change in productivity after switching from hand-coding ETL programs in C to using one of the leading graphical ETL tools. Actually, the company experienced a decrease during the first project with the tool. In later projects, the company only gained the same productivity as when hand-coding the ETL programs. The main benefits were that the graphical ETL program provided standardization and self-documenting ETL specifications such that new team members could easily be integrated. Trained specialists often use textual interfaces efficiently, while non-specialists use GUIs. In an ETL project, non-technical staff members are often involved as advisors, decision makers, etc., but the core development is (in our experience) done by dedicated and skilled ETL developers who are specialists. Therefore it is attractive to consider alternatives to GUI-based ETL programs. In relation to this, one can recall the high expectations for Computer-Aided Software Engineering (CASE) systems in the eighties. It was expected that non-programmers could take part in software development by specifying (not programming) characteristics in a CASE system that would then generate the code. Needless to say, the expectations were not fulfilled. It might be argued that forcing all ETL development into GUIs is a step back to the CASE idea. We acknowledge that graphical ETL programs are useful in some circumstances but we also claim that for many ETL projects, a (completely or partly) code-based solution is the right choice. However, many parts of such code-based programs are redundant if each ETL program is coded from scratch. To remedy this, a framework with common functionality is needed.
In this paper, we present pygrametl (pronounced py-gram-e-t-l) which is a programming framework for ETL programmers. The framework offers functionality for ETL development and while it is easy to get an overview of and to start using, it is still very powerful. pygrametl offers a novel approach to ETL programming by providing a framework which abstracts the access to the underlying DW tables and allows the developer to use the full power of the host programming language. In particular, the use of snowflaked dimensions is easy as the developer only operates on one “dimension object” for the entire snowflake while pygrametl handles the different DW tables in the snowflake. It is also very easy to insert data into dimension and fact tables while only iterating the source data once and to create new (relational or non-relational) data sources. Our experiments show that pygrametl indeed is effective in terms of development time and efficient in terms of performance when compared to a leading open-source GUI-based tool. In previous work [20], we have been involved in building ETL tools for a DW that stores data about web resources and tests of these web resources. Experiences from this work have been used in building pygrametl. In particular, we have made it easy to insert data into dimensions and fact tables while only iterating through the source data once, to insert data into snowflaked dimensions that span several underlying tables, and to add new kinds of (relational or non-relational) data sources. Due to the ease of programming (we elaborate in Section 3) and the rich libraries, we implemented pygrametl as an application library in Python. pygrametl is a framework where the developer makes the ETL program by coding it. pygrametl applies both functional and object-oriented programming to make the ETL development easy and provides often needed functionality. 
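As a first impression of this style of programming, the following self-contained Python sketch mimics the interface of a dimension object (automatic surrogate keys, lookup caching, and an ensure operation combining lookup and insert). It is a toy in-memory illustration written for this presentation, not pygrametl's actual code, which operates on database tables:

```python
# A toy in-memory "dimension object" illustrating the interface style the
# framework provides: rows are plain dicts, surrogate keys are assigned
# automatically, and `ensure` combines lookup and insert.  Illustration
# only; pygrametl's real classes work against an RDBMS.

class ToyDimension:
    def __init__(self, name, key, attributes, lookupatts=None):
        self.name = name
        self.key = key                      # name of the surrogate key
        self.attributes = attributes
        self.lookupatts = lookupatts or attributes
        self._rows = {}                     # surrogate key -> stored row
        self._cache = {}                    # lookup attribute values -> key
        self._nextkey = 1

    def lookup(self, row):
        """Return the surrogate key for the row's lookup attributes, or None."""
        return self._cache.get(tuple(row[a] for a in self.lookupatts))

    def insert(self, row):
        """Insert a row and return its newly assigned surrogate key."""
        key, self._nextkey = self._nextkey, self._nextkey + 1
        self._rows[key] = {a: row[a] for a in self.attributes}
        self._cache[tuple(row[a] for a in self.lookupatts)] = key
        return key

    def ensure(self, row):
        """Look the row up; insert it only if it is not already present."""
        key = self.lookup(row)
        return key if key is not None else self.insert(row)

testdim = ToyDimension("test", "testid", ["testname"])
k1 = testdim.ensure({"testname": "Accessibility", "errors": 7})  # extra attributes are ignored
k2 = testdim.ensure({"testname": "Accessibility"})               # cached; no second insert
```

Note how the first call may pass extra attributes without complaint, mirroring the paper's point that rows are not forced to be uniform.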
In this sense, pygrametl is related to other special-purpose frameworks where the user does coding but avoids repetitive and trivial parts by means of libraries that provide abstractions. This is, for example, the case for the web frameworks Django [1] and Ruby on Rails [14] where development is done in Python and Ruby code, respectively. Many commercial ETL and data integration tools exist [2]. Among the vendors of the most popular products, we find big players like IBM, Informatica, Microsoft, Oracle, and SAP [4, 5, 7, 8, 15]. These are also the vendors named as the market leaders in Gartner’s Magic Quadrant [2]. These vendors and many others provide powerful tools supporting integration between different kinds of sources and targets based on graphical design of the processes. Due to their wide field of functionality, the commercial tools often have steep learning curves and, as mentioned above, the user’s productivity does not necessarily get high(er) from using a graphical tool. Many of the commercial tools also have high licensing costs. Open source ETL tools are also available [19]. In most of the open source ETL tools, the developer specifies the ETL process either by means of a GUI or by means of XML. Scriptella [16] is an example of a tool where the ETL process is specified in XML. This XML can, however, contain embedded code written in Java or a scripting language. pygrametl goes further than Scriptella and does not use XML around the code. Further, pygrametl offers DW-specialized functionality such as direct support for slowly changing dimensions and snowflake schemas. The academic community has also been attracted to ETL. A recent paper [22] presents a survey of the research. Most of the academic approaches, e.g., [17, 21], use UML or graphs to model an ETL workflow. In this paper, we challenge the idea that graphical programming of ETL is always easier than text-based programming. Grönniger et al.
[3] have previously argued why text-based modeling is better than graphical modeling. Among other things, they point out that writing text is more efficient than drawing models, that it is easier to grasp details from text, and that the creative development can be hampered when definitions must be added to the graphical model. As graphical ETL tools often are model-driven such that the graphical model is turned into the executable code, these concerns are, in our opinion, also related to ETL development. Also, Petre [10] has previously argued against the widespread idea that graphical notation and programming always lead to more accessible and comprehensible results than what is achieved from text-based notation and programming. In her studies [10], she found that text overall was faster to use than graphics. The rest of this paper is structured as follows: Section 2 presents an example of an ETL scenario which is used as a running example in the paper. Section 3 gives an overview of pygrametl. Sections 4–8 present the functionality and classes provided by pygrametl. There are classes that represent different data sources (described in Section 4), dimensions (Section 5), fact tables (Section 6), and steps in an ETL flow (Section 7). Further, there are a number of convenient helper functions (Section 8) that provide often needed functionality. Section 9 evaluates pygrametl. Finally, Section 10 concludes and points to future work. 2 Example Scenario In this section, we describe an ETL scenario which we use as a running example throughout the paper. The example considers a DW where test results for tests of web pages are stored. 
This is inspired by the work we did in the European Internet Accessibility Observatory (EIAO) project [20] but has been simplified here for the sake of brevity. In the system, there is a web crawler that downloads web pages from different web sites. Each downloaded web page is stored in a local file. The crawler stores data about the downloaded files in a download log which is a tab-separated file. The fields of that file are shown in Table 1(a). When the crawler has downloaded a set of pages, another program performs a number of different tests on the pages. These tests could, e.g., test if the pages are accessible (i.e., usable for disabled people) or conform to certain standards. Each test is applied to all pages and for each page, the test outputs the number of errors detected. The results of the tests are also written to a tab-separated file. The fields of this latter file are shown in Table 1(b). After all tests are performed, the data from the two files is loaded into a DW by an ETL program.

Table 1: The source data format for the running example

<table> <thead> <tr> <th>Field</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>localfile</td> <td>Name of local file where the page was stored</td> </tr> <tr> <td>url</td> <td>URL from which the page was downloaded</td> </tr> <tr> <td>server</td> <td>HTTP header’s Server field</td> </tr> <tr> <td>size</td> <td>Byte size of the page</td> </tr> <tr> <td>downloaddate</td> <td>When the page was downloaded</td> </tr> <tr> <td>lastmoddate</td> <td>When the page was modified</td> </tr> </tbody> </table>

(a) DownloadLog.csv

<table> <thead> <tr> <th>Field</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>localfile</td> <td>Name of local file where the page was stored</td> </tr> <tr> <td>test</td> <td>Name of the test that was applied to the page</td> </tr> <tr> <td>errors</td> <td>Number of errors found by the test on the page</td> </tr> </tbody> </table>

(b) TestResults.csv
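The extraction part of this scenario is plain file parsing. As a sketch (using only Python's standard csv module, and inlined sample data instead of the real log files), each row of TestResults.csv can be joined with the download metadata for its page via the shared localfile field:

```python
# A minimal sketch of the extraction step for the running example: read
# the two tab-separated logs with the standard csv module and join each
# test result with the download metadata for its page.  Field names
# follow Table 1 (a subset, for brevity); the sample data is invented.
import csv
import io

downloadlog = "localfile\turl\tsize\nf1.html\thttp://ex.org/a\t1024\n"
testresults = "localfile\ttest\terrors\nf1.html\tTest1\t3\n"

def rows(text):
    """Parse tab-separated text into a list of dicts (header row = field names)."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# Index the download log by localfile, then enrich every test result.
pages = {r["localfile"]: r for r in rows(downloadlog)}
facts = [dict(r, **pages[r["localfile"]]) for r in rows(testresults)]
```

In a real run the two strings would of course be replaced by open file handles, and the merged rows would then be passed on to the transformation and load steps.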
The schema of the DW is shown in Figure 1. The DW schema has three dimensions: The test dimension holds information about each of the tests that are applied. This dimension is static and prefilled (and not changed by the ETL). The date dimension holds information about dates and is filled by the ETL on-demand. The page dimension is snowflaked and spans several tables. It holds information about the individual downloaded web pages including both static aspects (the URL and domain) and dynamic aspects (size, server, etc.) that may change between two downloads. The page dimension is also filled on-demand by the ETL. The page dimension is a so-called slowly changing dimension where information about different versions of a given web page is stored. Each dimension has a surrogate key (with a name ending in “id”) and one or more attributes. The individual attributes have self-explanatory names and will not be described in further detail here. There is one fact table which has a foreign key to each of the dimensions and a single measure holding the number of errors found for a certain test on a certain page on a certain date. 3 Overview of the Framework The purpose of pygrametl is to make it easy to load data into DWs managed by relational database management systems (RDBMSs). The trend on the commercial market for ETL is moving towards big suites of integration tools [2] supporting many kinds of targets. Focusing on RDBMSs as the targets for pygrametl keeps the design simple as it allows us to make assumptions and go for good solutions specialized for this domain instead of thinking in very general “integration terms”. The data sources do not have to be relational. When using pygrametl, the programmer writes code that controls the flow, the extraction (the E in ETL) from source systems, the transformations (the T in ETL) of the source data, and the load (the L in ETL) of the transformed data.
For the flow control, extraction, and load, pygrametl offers components that support the developer and it is easy for the developer to create more of these components. For the transformations, the programmer benefits from having access to the full-fledged host programming language. The loading of data into the target DW is particularly easy with pygrametl. The general idea is that the programmer creates objects for each fact table and dimension (different kinds are directly supported) in the DW. An object representing a dimension offers convenient methods like insert, lookup, etc. that hide all details of caching, key assignment, SQL insertion, etc. In particular, it should be noted that a snowflaked dimension is also treated in this way such that a single object can represent the entire dimension but the data is inserted into several tables in the underlying database. The dimension object’s methods take *rows* as arguments. A row in *pygrametl* is simply a mapping from names to values. Based on our personal experiences with other tools, we found it important that *pygrametl* does not try to validate that all data rows given to a dimension object have the same attributes or the same attribute types. If the programmer wants such checks, (s)he should make code for that. It is then, e.g., possible for the programmer to leave an attribute that was used as a temporary value holder in a row, or to leave out certain attributes on purpose. *pygrametl* only complains when attributes needed for its operations are missing. Attribute values that should be inserted into the target DW must exist when the insertion is done as *pygrametl* does not try to guess missing values. However, *pygrametl* has functionality for setting default values and/or on-demand callback of user-defined functions that provide the missing values. Some other existing tools are strict about enforcing uniformity of rows.
In *pygrametl*, it should be easy for the programmer to do what (s)he wants – not what the tool *thinks* (s)he wants. *pygrametl* is implemented as a module in Python [13]. Many other programming languages could obviously have been used. We chose Python due to its design to support programmer productivity and its comprehensive standard libraries. Further, Python is both dynamically typed (the programmer does not have to declare the type a variable takes) and strongly typed (if a variable holds an integer, the programmer cannot treat it like a string). Consider, for example, this function:

```python
def getfloat(value, default=None):
    try:
        return float(value)
    except Exception:
        return default
```

This function converts its input to a float or – if the conversion fails – to another value which defaults to None, Python’s null value. Note that no types are specified for the input variables in the function declaration. It is possible to call the function with different types as in the following:

```python
f1 = getfloat(10)
f2 = getfloat('1e1')
f3 = getfloat('A string', 10.0)
f4 = getfloat(['A', 'list'], 'Not a float!')
```

After this, f1, f2, and f3 all equal 10.0 while f4 holds the string ‘Not a float!’. The expression f1 + f2 will thus succeed, while f3 + f4 will fail since a float and a string cannot be added. Python is object-oriented but to some degree it also supports functional programming, e.g., such that functions or lambda expressions can be used as arguments. This makes it very easy to customize behavior. *pygrametl*, for example, exploits this to support calculation of missing values on-demand (see Section 5). As Python also supports default arguments, pygrametl provides reasonable defaults for most arguments to spare the developer unnecessary typing.

### 4 Data Source Support

In this and the following sections, we describe the functionality provided by pygrametl. As explained in Section 3, data is moved around in rows in pygrametl.
Instead of implementing our own row class, we use Python’s built-in dictionaries that provide efficient mappings between keys (i.e., attribute names) and values (i.e., attribute values). The data sources in pygrametl pass data on in such dictionaries. Apart from that, the only requirement to a data source is that it is iterable (i.e., its class must define the `__iter__` method) such that code like `for row in datasrc: ...` is possible. Thus, it does not require a lot of programming to create new sources (apart from the code that does the real extraction which might be simple or not depending on the source format). For typical use, pygrametl provides a few basic data sources described below. **SQLSource** is a data source returning the rows of an SQL query. The query, the database connection to use, and optionally new names for the result columns and “initializing” SQL are given when the data source is initialized. **CSVSource** is a data source returning the lines of a delimiter separated file turned into dictionaries. This class is in fact just implemented in pygrametl as a reference to the class csv.DictReader in Python’s standard library. Consider again the running example. There we have two tab-separated files and one instance of CSVSource should be created for each of them to load the data. For TestResults.csv, this is done as in

```python
testresults = CSVSource(open('TestResults.csv', 'r'), delimiter='\t')
```

Again, we emphasize the flexibility of using a language like Python for the pygrametl framework. Much more configuration can be done during the instantiation than what is shown but default values are used in this example. The input could also easily be changed to come from another source than a file, e.g., a web resource or a string in memory. **MergeJoiningSource** is a data source that equijoins rows from two other data sources.
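Because the only requirement on a source is that it is iterable and yields dictionaries, a new source can be written as a plain generator function. The following minimal sketch is ours (the name `simplesource` and its arguments are not part of pygrametl) and shows a hand-written source for tab-separated lines:

```python
def simplesource(lines, fieldnames, delimiter='\t'):
    # Yield one dict per input line: a pygrametl-style "row".
    for line in lines:
        values = line.rstrip('\n').split(delimiter)
        yield dict(zip(fieldnames, values))

rows = list(simplesource(['index.html\thttp://example.org/\t1024\n'],
                         ['localfile', 'url', 'size']))
```

Such a generator can be iterated with `for row in datasrc: ...` exactly like the built-in sources.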
A MergeJoiningSource is given two data sources (which must deliver rows in sorted order) and information about which attributes to join on. It then merges the rows from the two sources and outputs the combination of the rows. In the running example, we consider data originating from two data sources. Both data sources have the field localfile and this is how we relate information from the two files:

```python
inputdata = MergeJoiningSource(testresults, 'localfile',
                               downloadlog, 'localfile')
```

where testresults and downloadlog are CSVSources. **HashJoiningSource** is also a data source that equijoins rows from two other data sources. It does this by using a hash map. Thus, the input data sources do not have to be sorted.

### 5 Dimension Support

In this section, we describe the classes representing dimensions in the DW to load. This is the area where the flexibility and ease of use of pygrametl are most apparent. Figure 2 shows the class hierarchy for the dimension supporting classes. Methods and attributes only used internally in the classes are not shown. Only required arguments are shown, not those that take default values when not given. Note that SnowflakedDimension actually does not inherit from Dimension but offers the same interface and can be used as if it were a Dimension due to Python’s dynamic typing.

#### 5.1 Basic Dimension Support

**Dimension** is the most basic class for representing a DW dimension in pygrametl. It is used for a dimension that has exactly one table in the DW. When an instance is created, the name of the represented dimension (i.e., the name of the table in the DW), the name of the key column\(^1\), and a list of attributes (the underlying table may have more attributes but pygrametl will not use them) must be given. Further, a number of optional settings can be given as described in the following. A list of lookup attributes can be given. These attributes are used when looking up the key value. Consider again the running example.
The test dimension has the surrogate key testid but when data is inserted from the CSV files, the test in question is identified from its name (testname). The ETL application then needs to find the value of the surrogate key based on the test name. That means that the attribute testname is a lookup attribute. If no lookup attributes are given by the user, the full list of attributes (apart from the key) is used. When the dimension object is given a row to insert into the underlying DW table (explained below), the row does not need to have a value for the dimension’s key. If the key is not present in the row, a method (called idfinder) is called with the row as an argument. Thus, when creating a Dimension instance, the idfinder method can also be set. If not set explicitly, it defaults to a method that assumes that the key is numeric and returns the current maximum value for the key incremented by one. A default key value for unfound dimension members can also be set. If a lookup does not succeed, this default key value is returned. This is used if new members should not be inserted into the dimension but facts still should be recorded. By using a default key value, the facts would then reference a preloaded member representing that information is missing. In the running example, test is a preloaded dimension member with the value “Unknown test” for the testname attribute. This can be done as in the following code. ```python testdim = Dimension(name='test', key='testid', defaultidvalue=-1, attributes=['testname', 'testauthor'], lookupatts=['testname']) ``` Finally, it is possible for the developer to assign a function to the argument rowexpander. With such a function, it is in certain situations (explained below) possible to add required fields on-demand to a row before it is inserted into the dimension. Many of the methods defined in the Dimension class accept an optional name mapping when called. 
This name mapping is used to map between attribute names in the rows (i.e., dictionaries) used in pygrametl and names in the tables in the DW. Consider again the running example where rows from the source file TestResults.csv have the attribute test but the corresponding attribute in the DW’s dimension table is called testname. When the Dimension instance for test in pygrametl is given a row \(r\) to insert into the DW, it will look for the value of the testname attribute in \(r\). However, this value does not exist since it is called test in \(r\). A name mapping \(n = \{\text{‘testname’ : ‘test’}\}\) can then be set up such that when pygrametl code looks for the attribute testname in \(r\), test is actually used instead. Examples showing the use of the methods are given in Section 9. Dimension offers the method lookup which based on the lookup attributes for the dimension returns the key for the dimension member. As arguments it takes a row (which at least must contain the lookup attributes) and optionally a name mapping. Dimension also offers the method getbykey. This method is the opposite of lookup: As argument it takes a key value and it returns a row with all attributes for the dimension member with the given key value. Another method for looking up dimension members is `getbyvals`. This method takes a row holding a subset of the dimension’s attributes and optionally a name mapping. Based on the subset of attributes, it finds the dimension members that have equal values for the subset of attributes and returns those (full) rows. For adding a new member to a dimension, `Dimension` offers the method `insert`. This method takes a row and optionally a name mapping as arguments. The row is added to the DW’s dimension table. All attributes of the dimension must be present in the `pygrametl` row. The only exception to this is the key.

\(^1\)We assume that a dimension has a non-composite key.
If the key is missing, the `idfinder` method is applied to find the key value. The method `update` takes a row which must contain the key and one or more of the other attributes. The member with the given key value is updated to have the same values as the given attributes. `Dimension` also offers a combination of lookup and insert: `ensure`. This method first tries to use `lookup` to find the key value for a member. If the member does not exist and no default key value has been set, `ensure` proceeds to use `insert` to create the member. In any case, `ensure` returns the key value of the member to the caller. If the `rowexpander` has been set (as described above), that function is called by `ensure` before `insert` is called. This makes it possible to add calculated fields before an insertion into the DW’s dimension table is done. In the running example, the date dimension has several fields that can be calculated from the full date string (which is the only date information in the source data). However, it is expensive to do the calculations repeatedly for the same date. By setting `rowexpander` to a function that calculates them from the date string, the dependent fields are only calculated the first time `ensure` is invoked for a certain date. Compared to SQL Server Integration Services (SSIS) – a dominating ETL tool on the market – the functionality of `ensure` offers the programmer more flexibility. When using SSIS, the programmer should typically fill the dimension tables before fact tables are filled. The Lookup transformation cache in SSIS is filled before the data flow execution and it is hard to add new members on the fly such that they are available for later lookups. Workarounds are possible (e.g., use of stored procedures and hash tables for new members as in [23]) but they are complex and hard to maintain in comparison to the simple approach taken by `pygrametl` where the addition of new dimension members can be interleaved with the addition of facts.
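To illustrate the `rowexpander` mechanism described above, the following is a sketch of what such a function for the date dimension might look like. The function name, the attribute names, and the `(row, namemapping)` signature are our assumptions for illustration, not pygrametl's documented API:

```python
from datetime import datetime

def dateexpander(row, namemapping=None):
    # Hypothetical rowexpander: derive the dependent date fields from
    # the full date string before the row is inserted into the date
    # dimension. ensure only calls it when the member was not found.
    namemapping = namemapping or {}
    datestr = row[namemapping.get('date', 'date')]
    d = datetime.strptime(datestr, '%Y-%m-%d')
    row['day'], row['month'], row['year'] = d.day, d.month, d.year
    return row

row = dateexpander({'date': '2009-05-01'})
```

Because `ensure` first tries `lookup`, the (relatively expensive) parsing only happens the first time a given date is seen.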
In our previous work in the EIAO project [20], it was only possible to run through the source data once due to time constraints and the (RDF) data source used, and we had to add new dimension members when their first fact occurred. Such a strategy is very easy to implement with `pygrametl`. `CachedDimension` has the same public interface as `Dimension` and the same semantics. However, it internally uses memory caching of dimension members to speed up lookup operations. The caching can be complete such that the entire dimension is held in memory or partial such that only the most recently used members are held in memory. A `CachedDimension` can also cache new members as they are being added. As noted above, addition of new dimension members to a cache is complex when using SSIS. When an instance of `CachedDimension` is created, it is possible to set the same settings as for `Dimension`. Further, optional settings can decide the size of the cache, whether the cache should be prefilled with rows from the DW or be filled on-the-fly as rows are used, whether full rows should be cached or only keys and lookup attributes, and finally whether newly inserted rows should be put in the cache. In the running example, a `CachedDimension` for the test dimension can be made as in the following code.

```python
testdim = CachedDimension(name='test', key='testid',
                          defaultidvalue=-1,
                          attributes=['testname', 'testauthor'],
                          lookupatts=['testname'],
                          cachefullrows=True, prefill=True,
                          cachesize=500)
```

### 5.2 Advanced Dimension Support

`SlowlyChangingDimension` provides support for type 1 and 2 changes in slowly changing dimensions [6]. When an instance of `SlowlyChangingDimension` is created, it can be configured in the same way as a `Dimension` instance. Further, the name of the attribute that holds versioning information for type 2 changes in the DW’s dimension table should be set. A number of other things can optionally be configured.
It is possible to set which attribute holds the “from date” telling from when the dimension member is valid. Likewise it is possible to set which attribute holds the “to date” telling when a member becomes replaced. A default value for the “to date” for a new member can also be set. Further, functions that, based on the data in the rows, calculate these dates can be given but if they are not set, pygrametl defaults to a function that returns the current date. pygrametl offers some convenient functions for this functionality. It is possible not to set any of these date-related attributes such that no validity date information is stored for the different versions. It is also possible to list a number of attributes that should have type 1 changes (overwrites) applied. SlowlyChangingDimension has built-in cache support and its details can be configured. SlowlyChangingDimension offers the same functions as Dimension (which it inherits) and the semantics of the functions are basically unchanged. lookup is, however, modified to return the key value for the newest version. To handle versioning, SlowlyChangingDimension offers the method scdensure. This method is given a row (and optionally a name mapping). It is similar to ensure in the sense that it first sees if the member is present in the dimension and, if not, inserts it. However, it does not only do a lookup. It also detects if any changes have occurred. If changes have occurred for attributes where type 1 changes should be used, it updates the existing versions of the member. If changes have also occurred for other attributes, it creates a new version of the member and adds the new version to the dimension. As opposed to the previously described methods, scdensure has side-effects on its given row: It sets the key and versioning values in its given row such that the programmer does not have to query for this information afterwards.
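The decision scdensure makes can be illustrated with a small, self-contained sketch. This is not pygrametl's implementation (which also maintains version numbers, validity dates, and the cache); it only shows how incoming data might be classified into type 1 and type 2 changes:

```python
def classifychanges(current, new, type1atts, lookupatts):
    # Hypothetical sketch: compare the stored version of a member with
    # incoming data and report which kind of change (if any) occurred.
    ignored = set(lookupatts)
    type1 = [a for a in type1atts
             if a not in ignored and current.get(a) != new.get(a)]
    type2 = [a for a in new
             if a not in ignored and a not in type1atts
             and a in current and current[a] != new[a]]
    return type1, type2

t1, t2 = classifychanges(
    current={'url': 'http://example.org/', 'server': 'Apache', 'size': 1024},
    new={'url': 'http://example.org/', 'server': 'Apache', 'size': 2048},
    type1atts=['server'], lookupatts=['url'])
```

Here a changed `server` would be overwritten in place (type 1), while the changed `size` triggers a new version of the member (type 2).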
When a page is downloaded in the running example, it might have been updated compared to last time it was downloaded. To be able to record this history, we let the page dimension be a slowly changing dimension. We add a new version when the page has been changed and reuse the previous version when the page is unchanged. We look up the page by means of the URL and detect changes by considering the other attributes. We create the SlowlyChangingDimension object as in the following.

```python
pagedim = SlowlyChangingDimension(name='page', key='pageid',
                                  attributes=['url', 'size', 'validfrom',
                                              'validto', 'version',
                                              'domainid', 'serverversionid'],
                                  lookupatts=['url'],
                                  fromfinder=pygrametl.datereader('lastmoddate'),
                                  toatt='validto', versionatt='version')
```

In the shown code, the fromfinder argument is a method that extracts a “from date” from the source data when creating a new member. It is also possible to give a tofinder argument to find the “to date” for a version to be replaced. If not given, this defaults to the fromfinder. If another approach is wished (e.g., such that the to date is set to the day before the new member’s from date), tofinder can be set to a function which performs the necessary calculations. SnowflakedDimension supports filling a dimension in a snowflake schema [6]. A snowflaked dimension is spread over several tables such that there is one table for each level in the dimension hierarchy. The fact table references one of these tables that itself references tables that may reference other tables etc. A dimension member is thus not only represented in a single table as each table in the snowflaked dimension represents a part of the member. The complete member is found by joining the tables in the snowflake. Normally, it can be a tedious task to create ETL logic for filling a snowflaked dimension. First, a lookup can be made on the root table which is the table referenced by the fact table.
If the member is represented there, it is also represented in the dimension tables further away from the fact table (otherwise the root table could not reference these and thus not represent the member at the lowest level). If the member is not represented in the root table, it must be inserted but it is then necessary to make sure that the member is represented in the next level of tables such that the key values can be used in references. This process continues for all the levels until the leaves\(^2\). While this is not difficult as such, it takes a lot of tedious coding and increases the risk of errors. This is remedied with pygrametl’s SnowflakedDimension which takes care of the repeated ensures such that data is inserted where needed in the snowflaked dimension but such that the developer only has to make one method call to add/find the member. An instance of SnowflakedDimension is constructed from other Dimension instances. The programmer creates a Dimension instance for each table participating in the snowflaked dimension and passes those instances when creating the SnowflakedDimension instance. In the running example, the page dimension is snowflaked. We can create a SnowflakedDimension instance for the page dimension as shown in the following code (where the different Dimension instances are created before).

```python
pagesf = SnowflakedDimension([
    (pagedim, [serverversiondim, domaindim]),
    (serverversiondim, serverdim),
    (domaindim, topleveldomaindim)
])
```

The argument is a list of pairs where a pair shows that its first element references each of the dimensions in the second element (the second element may be a list). For example, it can be seen that pagedim references serverversiondim and domaindim.

\(^2\)It is also possible to do the lookups and insertions from the leaves towards the root but when going towards the leaves, it is possible to stop the search earlier if a part of the member is already present.
We require that if a table t’s key is named k, then an attribute referencing t from another table must also be named k. This requirement could be removed but it makes the specification of relationships between tables much easier. We also require that the tables in a snowflaked dimension form a tree (where the table closest to the fact table is the root) when we consider tables as nodes and foreign keys as edges. Again, we could avoid this requirement but this would complicate the ETL developer’s specifications and the requirement does not limit the developer. If the snowflake does not form a tree, the developer can make SnowflakedDimension consider a subgraph that is a tree and use the individual Dimension instances to handle the parts not handled by the SnowflakedDimension. Consider, for example, a snowflaked date dimension with the levels day, week, month, and year. A day belongs both to a certain week and a certain month but the week and the month may belong to different years (a week has a week number between 1 and 53 which belongs to a year). In this case, the developer could ignore the edge between week and year when creating the SnowflakedDimension and instead use a single method call to ensure that the week’s year is represented:

```python
# Represent the week's year. Read the year from weekyear
row['weekyearid'] = yeardim.ensure(row, {'year': 'weekyear'})
# Now let SnowflakedDimension take care of the rest
row['dateid'] = datesnowflake.ensure(row)
```

SnowflakedDimension’s lookup method calls the lookup method on the Dimension object for the root of the tree of tables. It is assumed that the lookup attributes belong to the table that is closest to the fact table. If this is not the case, the programmer can use lookup or ensure on a Dimension further away from the root and use the returned key value(s) as lookup attributes for the SnowflakedDimension.
The method getbykey takes an optional argument that decides if the full dimension member should be returned (i.e., a join between the tables of the snowflaked dimension is done) or only the part from the root. This also holds for getbyvals. ensure and insert work on the entire snowflaked dimension starting from the root and moving outwards as much as needed. The two latter methods actually use the same code. The only difference is that insert, to be consistent with the other classes, raises an exception if nothing is inserted (i.e., if all parts were already there). Algorithm 1 shows how the code conceptually works but we do not show details like the use of name mappings and how to keep track of whether an insertion happened. The algorithm is recursive and both ensure and insert first invoke it with dimension set to the table closest to the fact table. On line 1, a normal <table> <thead> <tr> <th>Algorithm 1 ensure_helper(dimension, row)</th> </tr> </thead> <tbody> <tr> <td>1: keyval ← dimension.lookup(row)</td> </tr> <tr> <td>2: if found then</td> </tr> <tr> <td>3: row[dimension.key] ← keyval</td> </tr> <tr> <td>4: return keyval</td> </tr> <tr> <td>5: for each table t that is referenced by dimension do</td> </tr> <tr> <td>6: keyval ← ensure_helper(t, row)</td> </tr> <tr> <td>7: if dimension uses the key of a referenced table as a lookup attribute then</td> </tr> <tr> <td>8: keyval ← dimension.lookup(row)</td> </tr> <tr> <td>9: if not found then</td> </tr> <tr> <td>10: keyval ← dimension.insert(row)</td> </tr> <tr> <td>11: else</td> </tr> <tr> <td>12: keyval ← dimension.insert(row)</td> </tr> <tr> <td>13: row[dimension.key] ← keyval</td> </tr> <tr> <td>14: return keyval</td> </tr> </tbody> </table> lookup is performed on the table. If the key value is found, it is set in the row and returned (lines 2–4). If not, the algorithm is applied recursively on each of the tables that are referenced from the current table (lines 5–6).
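Algorithm 1 can be rendered in runnable Python as follows. The MiniDimension class is a toy in-memory stand-in for pygrametl's Dimension objects, used only to make the sketch self-contained; the real code additionally handles name mappings and insertion tracking:

```python
class MiniDimension:
    # Toy in-memory dimension table used only to illustrate Algorithm 1.
    def __init__(self, key, lookupatts, refs=()):
        self.key = key
        self.lookupatts = lookupatts
        self.refs = refs          # referenced (outer) dimension tables
        self.rows = {}
        self.nextid = 1

    def lookup(self, row):
        # Return the key value for the member, or None if not found.
        return self.rows.get(tuple(row.get(a) for a in self.lookupatts))

    def insert(self, row):
        keyval = self.nextid
        self.nextid += 1
        self.rows[tuple(row.get(a) for a in self.lookupatts)] = keyval
        return keyval

def ensure_helper(dimension, row):
    keyval = dimension.lookup(row)                        # line 1
    if keyval is not None:                                # lines 2-4
        row[dimension.key] = keyval
        return keyval
    for t in dimension.refs:                              # lines 5-6
        ensure_helper(t, row)
    if any(t.key in dimension.lookupatts for t in dimension.refs):
        keyval = dimension.lookup(row)                    # lines 7-8
        if keyval is None:                                # lines 9-10
            keyval = dimension.insert(row)
    else:                                                 # lines 11-12
        keyval = dimension.insert(row)
    row[dimension.key] = keyval                           # line 13
    return keyval                                         # line 14

# A two-level snowflake: page references domain.
domaindim = MiniDimension('domainid', ['domain'])
pagedim = MiniDimension('pageid', ['url', 'domainid'], refs=(domaindim,))
row = {'url': 'http://example.org/a.html', 'domain': 'example.org'}
ensure_helper(pagedim, row)
```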
As side-effects of the recursive calls, key values are set for all referenced tables (line 3). If the key of one of the referenced tables is used as a lookup attribute for dimension, it might just have had its value changed in one of the recursive calls and a new attempt is made to look up the key in dimension (lines 7–8). If this attempt fails, we insert (part of) row into dimension (line 10). We can proceed directly to this insertion if no key of a referenced table is used as a lookup attribute in dimension (lines 11–12). SnowflakedDimension also offers an scdensure method. This method can be used when the root is a SlowlyChangingDimension. In the running example, we previously created pagedim as an instance of SlowlyChangingDimension. When pagedim is used as the root as in the definition of pagesf above, we can use the slowly changing dimension support on a snowflake. With a single call of scdensure, a full dimension member can be added such that the relevant parts are added to the five different tables in the page dimension. As previously mentioned, it is difficult to add dimension members and facts interleaved when using SSIS. The complexity of the workarounds to do this gets even higher when they must be applied to more tables as in snowflakes. In contrast it is very easy when using pygrametl. Also when using the open source graphical ETL tool Pentaho Data Integration (PDI), use of snowflakes requires the developer to use several lookup/update steps. It is then not possible to start looking up/inserting from the root as foreign key values might be missing. Instead, the developer has to start from the leaves and go towards the root. In pygrametl, the developer only has to use the SnowflakedDimension instance once. The pygrametl code considers the root first (and may save lookups) and only if needed moves on to the other levels. 6 Fact Table Support pygrametl also offers three classes to represent fact tables. In this section, we describe these classes. 
It is assumed that a fact table has a number of key attributes and that each of these references a dimension table. Further, the fact tables may have a number of measure attributes. FactTable provides a basic representation of a fact table. When an instance is created, the programmer gives information about the name of the fact table, the names of the key attributes, and optionally the names of measure attributes. Note that the methods defined for FactTable also support optional name mappings. FactTable offers the method insert which takes a row and inserts a fact into the DW’s table. This is obviously the most used functionality. It also offers a method lookup which takes a row that holds values for the key attributes and returns a row with values for both key and measure attributes. Finally, it offers a method ensure which first tries to use lookup. If a match is found on the key values, it compares the measure values between the fact in the DW and the given row. It raises an error if there are differences. If no match is found, it invokes insert. All the methods support name mappings. BatchFactTable inherits FactTable and has the same methods. However, it does not insert rows immediately when insert is called but waits until a user-configurable number of rows are available. This can lead to a significant performance improvement. BulkFactTable provides a write-optimized representation of a fact table. It does offer the insert method but not lookup or ensure. When insert is called, the data is not inserted directly into the DW but instead written to a file. When a user-configurable number of rows have been added to the file (and at the end of the load), the content of the file is bulkloaded into the fact table. The exact way to bulkload varies from DBMS to DBMS. Therefore, we again rely on Python’s functional programming support and require the developer to pass a function when creating an instance of BulkFactTable.
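The underlying buffer-and-callback design can be sketched in a few lines of plain Python. This is a simplified illustration of the idea, not pygrametl's BulkFactTable (which writes to a file and also tracks separators and NULL substitutes):

```python
class MiniBulkTable:
    # Toy sketch of the write-optimized design: buffer rows and hand
    # them to a user-supplied loader callback once bulksize rows have
    # been collected.
    def __init__(self, attributes, bulkloader, bulksize=2):
        self.attributes = attributes
        self.bulkloader = bulkloader
        self.bulksize = bulksize
        self.buffer = []

    def insert(self, row):
        self.buffer.append(tuple(row[a] for a in self.attributes))
        if len(self.buffer) >= self.bulksize:
            self.flush()

    def flush(self):
        # Called when the threshold is reached and at the end of the load.
        if self.buffer:
            self.bulkloader(self.buffer)
            self.buffer = []

batches = []
facttbl = MiniBulkTable(['pageid', 'testid', 'dateid', 'errors'],
                        bulkloader=batches.append, bulksize=2)
facttbl.insert({'pageid': 1, 'testid': 1, 'dateid': 1, 'errors': 0})
facttbl.insert({'pageid': 2, 'testid': 1, 'dateid': 1, 'errors': 3})
facttbl.flush()
```

In real use, the callback would issue the DBMS-specific bulkload command instead of collecting batches in a list.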
The bulk loading function is invoked by pygrametl when the bulkload should take place. Further, the developer can optionally define which separator and line-ending BulkFactTable uses and which file the data is written to before the bulkload. A string value used to represent NULLs can also be defined. For the running example, when using the database driver psycopg2 [12] and the DBMS PostgreSQL [11], the bulk loading function can be defined as shown below; it is then passed when the BulkFactTable instance for the fact table is created.

```python
def pgbulkloader(name, attributes, fieldsep, rowsep, nullval, filehandle):
    global connection  # Opened outside this function
    cursor = connection.cursor()
    cursor.copy_from(file=filehandle, table=name, sep=fieldsep,
                     null=nullval, columns=attributes)
```

### 7 Flow Support

To make it possible to create components with encapsulated functionality and easily connect such components, `pygrametl` offers support for *steps* and flow of data between them. The developer can, for example, create a step for extracting data, a step for cleansing, a step for logging, and a step for insertion into the DW’s tables. Each of the steps can be coded individually and finally the data flow between them can be defined. This borrows one of the good aspects from GUI-based ETL tools, namely that it is easy to keep different aspects separated and thus to get an overview of what happens in a sub-task. In `pygrametl`, we combine the best of both worlds: The developer can benefit from the expressiveness and power of coding but also use encapsulation and the built-in flow support. **Step** is the basic class for flow support. It can be used directly or as a base class for other step-supporting classes. The programmer can for a given Step set a worker function which is applied on each row passing through the Step. If not set by the programmer, the function `defaultworker` (which does not do anything) is used. Thus, `defaultworker` is the function that inheriting classes override.
The programmer can also determine the Step to which rows should be sent by default after the current one. That means that when the worker function finishes its work, the row is passed on to the next Step unless the programmer specifies otherwise. If no default Step is set, or if the programmer wants to send the row to a non-default Step (e.g., for error handling), there is the function `_redirect` which the programmer can use to explicitly direct the row to a specific Step. There is also a method `_inject` for injecting a new row into the flow before the current row is passed on. The new row can be injected without an explicit target, in which case the new row is passed on to the Step that rows by default are sent to. The new row can also be injected and sent to a specified target. This gives the programmer a large degree of flexibility.

The worker function can have side-effects on the rows it is given. This is, for example, used in the class `DimensionStep` which calls `ensure` on a certain Dimension instance for each row it sees and adds the returned key to the row. Another example is `MappingStep` which applies functions to attributes in each row. A typical use is to set up a MappingStep applying `pygrametl`'s type conversion functions to each row. A similar class is `ValueMappingStep` which performs mappings from one value set to another. Thus, it is easy to perform a mapping from, e.g., country codes like 'DK' and 'DE' to country names like 'Denmark' and 'Germany'.

To enable conditional flow control, the class `ConditionalStep` is provided. A `ConditionalStep` is given a condition (a function or a lambda expression). For each row, the condition is applied to the row, and if it evaluates to a True value, the row is passed on to the next default Step. In addition, another Step can optionally be given, and if the condition evaluates to a False value for a given row, the row is passed on to that Step. Otherwise, the row is silently discarded.
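The flow semantics described above (a worker applied per row, a default next Step, and conditional routing) can be sketched as follows. This is an illustrative simplification, not pygrametl's actual Step implementation:

```python
class Step:
    """Minimal step: applies a worker to each row, then forwards it."""
    def __init__(self, worker=None, next=None):
        self.worker = worker or (lambda row: None)  # defaultworker does nothing
        self.next = next                            # default target Step

    def process(self, row):
        self.worker(row)                 # the worker may have side-effects
        if self.next is not None:
            self.next.process(row)

    def _redirect(self, row, target):
        target.process(row)              # explicit routing to a non-default Step

class ConditionalStep(Step):
    """Routes rows to `next` when the condition holds, else to `whenfalse`
    (or silently discards the row when no false-target is given)."""
    def __init__(self, condition, next=None, whenfalse=None):
        super().__init__(next=next)
        self.condition = condition
        self.whenfalse = whenfalse

    def process(self, row):
        if self.condition(row):
            if self.next is not None:
                self.next.process(row)
        elif self.whenfalse is not None:
            self.whenfalse.process(row)
        # otherwise the row is silently discarded

seen = []
sink = Step(worker=seen.append)
flow = ConditionalStep(lambda r: r['size'] > 0, next=sink)
for r in ({'size': 3}, {'size': 0}, {'size': 7}):
    flow.process(r)
# seen now holds the two rows with positive size
```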
We emphasize how easy this is to use. The programmer only has to pass on a lambda expression or function. Defining new step functionality is also very easy: the programmer just writes a single function that accepts a row as input and gives this function as an argument when creating a Step.

Steps can also be used for doing aggregations. The base class for aggregating steps, `AggregatingStep`, inherits Step. Like an ordinary Step, it has a defaultworker. This method is called for each row given to the AggregatingStep and must maintain the data necessary for computing the aggregate. Further, there is a method `defaultfinalizer` that is given a row and writes the result of the aggregation to the row.

The functionality described above could also be implemented by the developer without **Steps**. However, if the developer prefers to think in terms of connected steps (as typically done in GUI-based ETL programs), (s)he can create specialized components with encapsulated functionality by using the **Step** classes. It is even possible to create a GUI in which the **pygrametl** **Steps** can be placed and connected visually while still allowing the programmer to take full advantage of Python programming.

### 8 Further Functionality

Apart from the classes described in the previous sections, **pygrametl** also offers some convenient methods often needed for ETL. These include functions that operate on rows (copy, rename, project, set default values) and functions that convert types but return a user-configurable default value if the conversion cannot be done (like **getfloat** shown in Section 3). In particular for use with **SlowlyChangingDimension** and its support for time stamps on versions, **pygrametl** provides a number of functions for parsing strings to create date and time objects. Some of these functions apply functional programming such that they dynamically create new functions based on their arguments.
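Both kinds of helpers can be illustrated like this. These are hand-written stand-ins for the pygrametl functions, not their actual source; the names and signatures are simplified:

```python
from datetime import date, datetime

def getfloat(value, default=None):
    """Convert a value to float, returning a default if conversion fails."""
    try:
        return float(value)
    except (TypeError, ValueError):
        return default

def datereader(attribute, fmt='%Y-%m-%d'):
    """Dynamically build a function that parses row[attribute] into a date.

    Mirrors the functional style described above: the returned function is
    generated from the arguments given here.
    """
    def reader(row):
        return datetime.strptime(row[attribute], fmt).date()
    return reader

# A generated function like this can serve as a 'fromfinder':
fromfinder = datereader('lastmoddate')
assert fromfinder({'lastmoddate': '2009-03-01'}) == date(2009, 3, 1)
assert getfloat('3.5') == 3.5
assert getfloat('n/a', default=-1.0) == -1.0
```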
In this way, specialized functions for extracting time information can be created. For an example, refer to the **pagedim** object we defined in Section 5. There we set **fromfinder** to a (dynamically generated) function that reads the attribute lastmoddate from each row and transforms the read text into a date object. While this set of provided **pygrametl** functions is relatively small, it is important to remember that with a framework like **pygrametl**, the programmer also has access to the full standard library of the host language (in this case Python). Further, it is easy for the programmer to build up private libraries with the most used functionality.

### 9 Evaluation

To evaluate **pygrametl**, we implemented an ETL program for the running example. To get data, we made a data generator (details are given below). We also implemented an ETL solution in the graphical ETL tool Pentaho Data Integration (PDI) [9] to be able to compare the development efforts when using visual programming and code-based programming, respectively. PDI is a leading open-source ETL tool. It is Java-based and works with many different data sources and targets. Ideally, the comparison should have included commercial ETL tools, but the license agreements of these tools (at least the ones we have read) explicitly forbid publishing of any evaluation/performance results without the consent of the provider. In this section, we present the findings of the evaluation. Note that **pygrametl**, the described ETL program, and the data generator are publicly available from [http://pygrametl.org](http://pygrametl.org).

#### 9.1 Development Time

We have previous experience with PDI, but we obviously know the details of **pygrametl** very well. Therefore, it is hard to make a comprehensive comparison of the development times without having trained development teams at our disposal. We used each tool twice to create identical solutions. In the first use, we worked more slowly as we also had to work out a strategy.
In the second use, we measured the "interaction time" spent on typing and clicking.

The **pygrametl**-based ETL program was very easy to develop. It took a little less than one hour to code the complete ETL program in the first use. In the second use, it took 24 minutes. The program consists of 142 lines including comments and plenty of whitespace (for example, there is only one argument on each line when the **Dimension** objects are created). The program only has 56 Python statements. This strongly supports that it is easy to develop ETL programs using **pygrametl**. The main method of the developed ETL program is shown below.

```python
def main():
    for row in inputdata:
        extractdomaininfo(row)
        extractserverinfo(row)
        row['size'] = pygrametl.getint(row['size'])  # Convert from string to int
        # Add the data to the dimension and fact tables
        row['pageid'] = pagesf.scdensure(row)
        row['dateid'] = datedim.ensure(row, {'date': 'downloaddate'})
        row['testid'] = testdim.lookup(row, {'testname': 'test'})
        facttbl.insert(row)
    connection.commit()
```

The methods `extractdomaininfo` and `extractserverinfo` have four lines of code that extract the domain, top-level domain, and server name from the URL and serverVersion attributes. Note that the page dimension is a slowly changing dimension, so we use `scdensure` to add new (versions of) members. This is a very easy way to fill a dimension that is both snowflaked and slowly changing. To fill the date dimension correctly, we have set a `rowexpander` for the `datedim` object to a function that (on demand) calculates the attribute values for a member to insert into the dimension. Thus, it is enough to use `ensure` to find or insert a member. The test dimension is preloaded and we only do lookups.

In comparison, the first PDI-based solution took us a little more than two hours to make work. In the second use of PDI, it took 28 minutes to create the solution. The solution has 19 boxes and 19 arrows between them. The flow is shown in Figure 3.
![Figure 3: Data flow in PDI-based solution.](image)

We emulate the `rowexpander` feature of `pygrametl` by first looking up a date and calculating the remaining date attributes in case there is no match. Note how we must fill the page snowflake from the leaves towards the root, as discussed in Section 5.2.

Comparing the number of statements (56) in the `pygrametl`-based solution to the number of boxes (19) in the PDI-based solution, it can be seen that one box in PDI corresponds to roughly three Python statements when using `pygrametl`. This is a very promising result supporting that `pygrametl` is efficient to use (remember that a box is not just a box – rich dialogs must be used to configure its behaviour). From our experiments where we only measured the time spent on using keyboard and mouse, we found that the graphical tool was not faster to use than programming by code in a text editor. In fact, typing pure text was slightly faster. In the experiment, `pygrametl` was faster to use than PDI in both uses. The first solution was much faster to create in `pygrametl`, and we find that the strategy is far simpler to work out in `pygrametl` (compare the shown main method and Figure 3).

#### 9.2 Performance

To test the performance of the solutions, we generated data. The generator was configured to create results for 2,000 different domains, each having 100 pages. Five tests were applied to each page. Thus, data for one month gave 1 million facts. To test the SCD support, a page could remain unchanged between two months with probability 0.5. For the first month, there were thus 200,000 page versions, and for each following month, there were \( \sim 100,000 \) new page versions. We did the tests for 5, 10, 50, and 100 months, i.e., on data sets of realistic sizes. The solutions were tested on a single, powerful server with two quad-core 1.86GHz Xeon CPUs, 16GB of RAM, and 10,000 rpm harddisks. (We did not test PDI's support for distributed execution.)
The server ran SUSE Linux Enterprise Edition 10, Python 2.6, Sun Java 6SE, PDI 3.2-RC1, and PostgreSQL 8.3.5. We tested the tools on a DW where the primary key constraints were declared but the foreign key constraints were not. The DW had an index on page(url, version). We loaded the data into DWs that already held data for 100 months. PDI was tested in two modes: one with a single connection to the DW, such that the ETL is transactionally safe, and one using a special component for bulkloading the facts into PostgreSQL. This special component makes its own connection to the DW. This makes the load faster but transactionally unsafe, as a crash can leave the DW loaded with inconsistent data. The pygrametl-based solution uses bulkloading (BulkFactTable) but always runs in safe transactional mode with a single connection to the DW. The solutions were configured to use caches without size limits. When PDI was tested, the maximum Java heap size was set to 8GB (12GB for the largest data set, as 8GB was too little for it).

Figure 4(a) shows the elapsed wall-clock time for the loads and Figure 4(b) shows the spent CPU time. It can be seen that the elapsed time grows super-linearly for PDI while it grows linearly for pygrametl. PDI does not scale linearly since swapping occurs for the big data set due to the high memory consumption (note that the spent CPU time does scale linearly for PDI). PDI is the fastest when it is allowed to use two connections, but this advantage fades for large data sets. In safe mode with a single transaction, the pygrametl-based solution is the fastest for all the data sets. When loading 100 million facts, the pygrametl-based solution handles 5081 facts/sec, while PDI handles 2796 facts/sec with a single connection and 4065 facts/sec with two connections.

Servers may have many CPUs/cores, but it is still desirable for the ETL to use little CPU time. More CPU time is then available for other purposes like processing queries.
This is particularly relevant if the ETL is running on a virtualized server with contention for the CPUs. From Figure 4(b), it can be seen that pygrametl uses much less CPU time than PDI. When loading the data set with 50 million facts, pygrametl's CPU utilization is 72%, while PDI's CPU utilization is 177% with one connection and 191% with two. It is clear that, in terms of resource consumption, it is beneficial to code a specialized light-weight program instead of using a general, feature-rich but heavy-weight ETL application.

With a real-life, confidential data set (with around 8 million facts in three fact tables and 8 dimensions, of which one is a snowflake with two hierarchies and six participating tables), we have experienced that PDI 3.0, on the same server as before, loads the data set in around 240 minutes while a pygrametl-based solution uses around 100 minutes. The resource consumption of the pygrametl-based solution is also much smaller than PDI's: pygrametl used 70 minutes of CPU time while PDI used 730 minutes of CPU time. This means that the average CPU usage for pygrametl was 70% while it was 300% for PDI (meaning that, on average, three cores were used concurrently by PDI). Further, pygrametl had 50MB resident in RAM while PDI had 3.3GB resident in RAM for that data set.

![Figure 4: Performance results.](image)

### 10 Conclusion and Future Work

In this paper, we presented **pygrametl**, a programming framework for ETL programming. We challenged the conviction that ETL development is always best done in a graphical tool. We proposed to also let ETL developers (who typically are dedicated experts) do ETL programming by writing code. Instead of "drawing" the entire program, the developers can concisely and precisely express the processing in code. To make this easy, **pygrametl** provides commonly used functionality such as data source access and filling of dimensions and fact tables.
In particular, we emphasize how easy it is to fill snowflaked and slowly changing dimensions. A single method call will do, and **pygrametl** takes care of all needed lookups and insertions.

**pygrametl** is implemented in Python. Python was chosen because it offers convenient functionality to boost programming efficiency (including object-oriented and functional programming) and a rich standard library. Our experiments have shown that ETL development with **pygrametl** is indeed efficient and effective. **pygrametl**'s flexible support of fact and dimension tables makes it easy to fill the DW, and the programmer can concentrate on the needed operations on the data, where (s)he benefits from the power and expressiveness of a real programming language to achieve high productivity.

We do, however, acknowledge that some persons prefer a graphical overview of the ETL process. Indeed, an optimal solution could include both a GUI and code. In future work, we plan to make a GUI for creating and visually connecting steps. Roundtrip engineering, such that updates in the code are visible in the GUI and vice versa, should be possible. We also plan to investigate how to provide an efficient and yet simple way to create and run ETL programs in parallel or distributed DW environments. It should be possible to plug in new DW and ETL servers and have the coordination done automatically.

### Acknowledgments

This work was supported by the Agile & Open Business Intelligence project co-funded by the Regional ICT Initiative under the Danish Council for Technology and Innovation under grant no. 07-024511.

### References
# Formal Verification of Source-to-Source Transformations for HLS

Louis-Noël Pouchet (Colorado State University), Emily Tucker (Colorado State University), Niansong Zhang (Cornell University), Hongzheng Chen (Cornell University), Debjit Pal (University of Illinois Chicago), Gabriel Rodriguez (CITIC, Universidade da Coruña), Zhiru Zhang (Cornell University)

**ABSTRACT** High-level synthesis (HLS) can greatly facilitate the description of complex hardware implementations by raising the level of abstraction up to a classical imperative language such as C/C++, usually augmented with vendor-specific pragmas and APIs. Despite these productivity improvements, attaining high performance for the final design remains a challenge, and higher-level tools like source-to-source compilers have been developed to generate programs targeting HLS toolchains. These tools may generate highly complex HLS-ready C/C++ code, reducing the programming effort and enabling critical optimizations. However, whether these HLS-friendly programs are produced by a human or a tool, validating their correctness or otherwise exposing bugs remains a fundamental challenge. In this work we target the problem of efficiently checking the semantic equivalence between two programs written in C/C++ as a means of ensuring the correctness of the description provided to the HLS toolchain, by proving that an optimized code version fully preserves the semantics of the unoptimized one. We introduce a novel formal verification approach that combines concrete and abstract interpretation with a hybrid symbolic analysis. Notably, our approach is mostly agnostic to how control-flow, data storage, and dataflow are implemented in the two programs. It can prove equivalence under complex bufferization and loop/syntax transformations, for a rich class of programs with statically interpretable control-flow.
We present our techniques and their complete end-to-end implementation, demonstrating how our system can verify the correctness of highly complex programs generated by source-to-source compilers for HLS, and detect bugs that may elude co-simulation.

**CCS CONCEPTS** • Software and its engineering → Software verification.

**KEYWORDS** Program equivalence, formal verification, high-level synthesis

## 1 INTRODUCTION

Over the past decade, high-level synthesis (HLS) has been increasingly adopted for specialized hardware design targeting FPGAs and ASICs [9, 10, 27]. To achieve good performance, users apply source-to-source transformations on HLS algorithm specifications, necessitating substantial program rewriting and intricate compositions of program transformations. Such customizations for HLS, whether applied manually by a designer or by (domain-specific) compilers for HLS such as AutoSA [43], HeteroCL [25], or the AMD Merlin compiler [47], may introduce subtle bugs that cannot be easily detected during testing or simulation. It remains a fundamental challenge to efficiently verify the correctness of a program optimized for HLS under a rich set of hardware-oriented optimizations.

The modern HLS design process typically starts from an algorithmic description and undergoes a series of source-to-source transformations [32] before synthesis. The goal is to transform the algorithmic descriptions into hardware-friendly ones that can generate high-quality RTL. While some transformations can be expressed by adding vendor-specific pragmas, many crucial optimizations require substantial changes in control-flow structure, I/O approach, on-chip buffer management, function boundaries, and exposing concurrency. For example, to transform a matrix multiplication kernel into a high-throughput systolic array (SA), one needs to build a customized memory hierarchy, an array of vectorized processing elements (PEs), and an I/O network [43].
Finally, one needs to tune the design to maximize the performance while meeting the resource constraints, and enable parallel execution of the PEs. Every source-to-source transformation applied to the files supplied to the HLS toolchain for eventual synthesis involves extensive rewriting. This process is prone to errors and can be time-consuming to debug. We target the problem of verifying the correctness of source-to-source transformations by proving the semantics equivalence between two programs. We leverage two properties that often manifest in high-performance HLS designs: fixed kernel sizes and static control flow. Many spatial architectures have a fixed size, e.g. SAs [43]. Many hardware accelerated applications such as video processing, convolutional neural networks, and large language models typically have a static control flow. Technically we support any control flow whose branches can be unambiguously evaluated through concrete interpretation. This set of programs is formally defined as statically interpretable control-flow (SICF) programs in Sec. 3. In this work we propose a novel framework to prove the semantic equivalence between a pair of programs, where one program is a substitute for the other, built after applying source-to-source transformations for HLS. We overcome the difficulty to support a rich set of optimizations when proving their equivalence: our equivalence system is agnostic to how the code is implemented, be it in terms of storage or loop structure, communication scheme, etc. This future-proof design enables support for emerging optimizations. In particular, we can prove equivalence under any loop transformation approach, code refactoring, insertion of local/reuse buffers, insertion of FIFOs for communication, including blocking FIFOs for communication/synchronization using coarse-grain dataflow-style concurrency, etc. 
However, we note that by design our work is independent of any specific HLS toolchain: we verify the equivalence of two C/C++ source programs that are input to an HLS toolchain. Assuming the HLS backend used is correct, then whichever HLS approach is implemented, the two programs will necessarily compute the same output when given the same inputs, if we have verified them equivalent. We make the following contributions:

- We present an end-to-end, fully implemented system to prove the equivalence between a pair of functions in the C/C++ language, under meaningful and practical restrictions.
- We combine partial concrete evaluation of specific program parts with a symbolic analysis to make the system robust to a rich set of code transformations, including key optimizations for HLS, such as loop transformations, bufferization, data layout changes, blocking/non-blocking FIFO communications, etc.
- We provide extensive experimental evaluation, demonstrating the ability of our system to quickly verify the correctness of advanced optimizations, including AutoSA-generated matrix-multiply 64x64 systolic array designs over 145,000 lines of code, the inference of a full BERT layer optimized for FPGAs, and a variety of numerical kernels optimized with HeteroCL, in seconds to minutes while using a single CPU core.

## 2 APPROACH TO VERIFICATION

We now outline our approach to proving the equivalence between two programs, which is designed with the following considerations. We target optimized loop-based functions, with the objective of being mostly independent of the syntax used to implement these functions, reasoning instead on the semantics of the program computations. The class of programs we support, which includes for example affine programs [28], encompasses a broad range of applications such as linear algebra, image processing, data mining, machine learning, physics simulation, and more [37], as well as modern deep learning inference computations [25, 35].
Our coverage extends way beyond programs that are syntactically analyzable as polyhedral programs: in Sec. 3, we define for the first time a novel class called Statically Interpretable Control-Flow (SICF) programs, which we handle in time and space linear with the number of operations executed in the programs. As our approach is mostly agnostic to the syntax used, that is how the program has been written, we cover a wide variety of program transformations that are typically implemented for high-performance HLS designs — arbitrary loop transformations, arbitrary statement transformations, and arbitrary storage and data transfer approaches (e.g., scalarization, local and multi-buffering, data transfers using FIFOs). To the best of our knowledge, this is the richest set of code transformations supported in a single automated program equivalence tool. Finally we target a practical system capable of formally proving equivalence, subject to a meaningful set of restrictions, while maintaining high throughput for proof computation — our formal verification system offers roughly the performance of code simulation, processing at approximately 0.5 million statements per second, utilizing just a single CPU core. We immediately set a number of restrictions on the class of programs we support for our system to be able to prove their equivalence. First, we do not prove equivalent arbitrary pairs of programs: we specifically reason only on a pair of programs \( P_A, P_B \) such that \( P_B \) is meant to replace \( P_A \) in a larger program. That is, these two programs are necessarily called with the exact same environment [12], for every possible execution. Second, as we require the control-flow to be statically interpretable, a looser condition than for static analyses where the control-flow shall be statically analyzable (e.g., using polyhedral structures [15]), we typically require the problem sizes to be known at compile-time. We do not support parametric loop nest analysis. 
This requirement also arises when performing co-simulation or testing, and can be partially alleviated by proving equivalence once for each element in a set of problem sizes. With these objectives and restrictions in mind, we now introduce the key principles of our approach, each developed later in this paper. We illustrate the concepts by proving equivalent the pair of simple programs shown in Lst. 1. For clarity we explicitly use the AMD Vitis HLS semantics for FIFOs and dataflow region declaration, but our approach is not specific to any particular HLS toolchain.

// Program \( P_A \), the original program:

```c
void matvec(float* restrict A, float* restrict x,
            float* restrict y, int N) {
  for (int i = 0; i < N; ++i) {
    y[i] = 0;
    for (int j = 0; j < N; ++j)
      y[i] += A[i*N + j] * x[j];
  }
}
```

// Program \( P_B \), a replacement for program \( P_A \):

```c
typedef hls::stream<float> fifo_t;

void data_in(fifo_t & fifo_A, fifo_t & fifo_x,
             float* restrict A, float* restrict x, int N) {
  for (int i = 0; i < N; ++i) {
    fifo_x.write(x[i]);
    for (int j = 0; j < N; ++j)
      fifo_A.write(A[i*N + j]);
  }
}
```

// Program \( P_B \) core:

```c
void matvec_core(fifo_t & fifo_A, fifo_t & fifo_x,
                 float* restrict y, int N) {
  float x_buff[N];
  for (int i = 0; i < N; ++i)
    x_buff[i] = fifo_x.read();
  for (int i = 0; i < N; ++i) {
    float y_temp = 0;
    int j;
    for (j = 0; j + 1 < N; j += 2) {
      y_temp += fifo_A.read() * x_buff[j];
      y_temp += fifo_A.read() * x_buff[j+1];
    }
    for (; j < N; ++j)
      y_temp += fifo_A.read() * x_buff[j];
    y[i] = y_temp;
  }
}
```

Listing 1: Illustrating example: dense matrix-vector product. As shown later we cover a complex range of code and data transformations; this example already illustrates changing I/O, storage, statements, and loops in the program.
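Before any formal reasoning, the two variants can be cross-checked concretely on sample inputs, which is the co-simulation baseline our approach is compared against (and which proves nothing beyond the inputs tried). A host-side sketch, using `std::queue` in place of `hls::stream` (an assumption for portability; `matvec_ref` and `matvec_fifo` are illustrative names re-expressing the listings):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Reference P_A: dense matrix-vector product.
void matvec_ref(const std::vector<float>& A, const std::vector<float>& x,
                std::vector<float>& y, int N) {
  for (int i = 0; i < N; ++i) {
    y[i] = 0;
    for (int j = 0; j < N; ++j)
      y[i] += A[i * N + j] * x[j];
  }
}

// P_B re-expressed in plain C++: std::queue stands in for the HLS FIFO.
void matvec_fifo(const std::vector<float>& A, const std::vector<float>& x,
                 std::vector<float>& y, int N) {
  std::queue<float> fifo_A, fifo_x;
  // data_in: stream x[i], then row i of A.
  for (int i = 0; i < N; ++i) {
    fifo_x.push(x[i]);
    for (int j = 0; j < N; ++j) fifo_A.push(A[i * N + j]);
  }
  // matvec_core: buffer x, then accumulate with the j-loop unrolled by 2.
  std::vector<float> x_buff(N);
  for (int i = 0; i < N; ++i) { x_buff[i] = fifo_x.front(); fifo_x.pop(); }
  for (int i = 0; i < N; ++i) {
    float y_temp = 0;
    int j;
    for (j = 0; j + 1 < N; j += 2) {
      float a0 = fifo_A.front(); fifo_A.pop();
      y_temp += a0 * x_buff[j];
      float a1 = fifo_A.front(); fifo_A.pop();
      y_temp += a1 * x_buff[j + 1];
    }
    for (; j < N; ++j) {
      float a = fifo_A.front(); fifo_A.pop();
      y_temp += a * x_buff[j];
    }
    y[i] = y_temp;
  }
}
```

Since both variants accumulate the partial products of each `y[i]` in the same order, the floating-point results match bit-for-bit on a given input; a concrete check covers only that input, whereas the approach below proves equivalence for any live-in data.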
We are not aware of any tool that can currently prove the equivalence between these programs within a single framework: for example, both ISA [42], based on static analysis, and PolyCheck, based on a more general dynamic analysis [6], would fail to prove equivalence, as statements cannot be matched between the two programs since they differ in storage. To prove that \( P_A \) is equivalent to \( P_B \) for the calling context considered (which provides the problem sizes here), our approach operates as follows:

- We build a symbolic canonical representation of the computation performed to produce the value of each memory cell that is live-in/live-out for the program. In the case of well-defined functions without side effects, the class we support, this set of cells is captured by the function arguments. This representation shall be independent of which statement(s) were used to produce the computation, as well as of any temporary storage implemented. This is presented in Sec. 4.
- We prove equivalence by checking whether this canonical representation is identical between both programs, for every live-in/live-out memory cell. This is presented in Sec. 5.
- To be robust to "any" implementation of the program and its control-flow, we rely on a partial concrete interpretation of the program, which concretely evaluates control-flow expressions and simplifies them where possible. When an expression cannot be concretely evaluated, it is automatically promoted to a symbolic representation during interpretation. If the interpreter reaches the end of the program control, then and only then can we prove the programs equivalent, if their per-cell computation representations are fully identical. This is presented in Sec. 3.
- We prove equivalence when using FIFOs of a given depth, non-blocking and blocking, sequentially and with coarse-grain dataflow-style concurrent execution of the functions that write/read the FIFOs, as in programs generated by AutoSA. This is presented in Sec. 6.

We build a representation of the computation producing a value stored in a memory cell (e.g., \( y[0] \)) in the form of a graph, specifically a computation directed acyclic graph (CDAG) [14, 34]. We formally define CDAGs in Sec. 4.1. Fig. 1 shows an excerpt of the CDAGs built for program \( P_B \). Every expression in the program which can be concretely evaluated is evaluated: subscript expressions such as \( i*N + j \) are replaced by their results during interpretation, giving values 0, 1, ..., which are used to identify the memory cells being addressed. When an expression cannot be computed, for example because it uses a live-in, unknown value such as \( A[0] \), the expression is automatically promoted to a symbolic representation: its CDAG. At every assignment the current CDAG is stored, so that it can be used as a replacement for the next use of the variable. The process is repeated for every iteration of \( j \), and one CDAG per \( y[i] \) is eventually created. When interpreting \( P_B \) sequentially, we emulate the FIFO API by implementing \( \text{fifo.read()} \) and \( \text{fifo.write()} \) via a simple array \( \text{fifo[]} \) with its start/end positions, in C code processed by the interpreter. We discuss in Sec. 6 the details of verifying FIFOs, sequentially or in dataflow-style mode. Note this construction process is agnostic to how storage and computations are implemented: creating scalars and different loop nests simply leads here to building the same CDAG that is eventually stored in \( y[i] \) for both \( P_A \) and \( P_B \). If and only if the control-flow interpreter has reached the end of the program control have we built CDAGs for every live-in/live-out memory cell touched by the program. We can then compute their equivalence by checking, cell by cell, whether the CDAGs are fully isomorphic. If so, we have proven the programs equivalent. We can catch errors in the loop nests, in the handling of FIFOs (e.g.
incomplete data, deadlocks, etc.), in how statements are scheduled wrt. dependences, etc. Here we prove \( P_A \) and \( P_B \) equivalent for \( N = 100 \) (and any \( A \), \( x \)) in 0.4s. In general, transformations should not alter the number and relative order of dependent operations for the isomorphism check to succeed. However, as discussed in Sec. 5, our post-normalization of the CDAGs enables the support of transformations exploiting associativity/commutativity of operations, as well as some changes in the set of operations computed, if a set of semantics-preserving rewrite rules is agreed on.

---

**Figure 1: Excerpt of interpretation for \( y[0] \) with \( N = 2 \) for \( P_B \)** — When interpreting a statement, first every variable referenced is replaced by its content from the interpreter memory, if any. Integer sub-expressions are then simplified with concrete evaluation, and the result is stored in memory: B1 assigns 0 to \( y_{\text{temp}} \). Then the first \( j \) loop is interpreted, storing 0 for \( j \), testing \( 2 < 2 \), transferring control to the next \( j \) loop. It tests \( 0 < 2 \) and moves to interpreting its body. For B2, in the RHS \( y_{\text{temp}} \) is replaced by its current value, 0; \( \text{fifo\_A.read()} \) is replaced by its first written element, the symbolic (live-in) value \( A[0] \); and \( x_{\text{buff}}[0] \) by \( x[0] \). As these values are symbolic, the entire expression is symbolic, stored as a CDAG for \( y_{\text{temp}} \). When B3 is interpreted, we replace \( y_{\text{temp}} \) by its CDAG from memory, creating a new CDAG, now stored for \( y_{\text{temp}} \). Once loop \( j \) terminates, B4 assigns the CDAG of \( y_{\text{temp}} \) to the live-out \( y[0] \).

3 AST-BASED HYBRID INTERPRETER

We now present our concrete interpreter, which automatically builds CDAGs for programs. We have implemented support for a large subset of the C/C++ language, within the PAST [3, 36] library.
PAST is a generic, language-independent Abstract Syntax Tree library, equipped with a parser from C to PAST built using the flex/bison ANSI C grammars by Lee and Degener [4].

3.1 Architecture of the Interpreter

The interpreter operates on a PAST tree representing an input, compilable program. Contrary to a full C interpreter, it does not require a complete program to be provided: it supports any code region, and functions with their definitions provided. Any value which is not computable during interpretation is considered symbolic, allowing the interpreter to proceed even without the concrete data the program operates on. The interpreter consists of:

- An AST traversal mechanism, which implements all control-flow operations of the program, including function calls, variable declarations, etc. This is presented in Sec. 3.2.
- A concrete expression evaluator, used to evaluate and simplify expressions, typically control-flow and array subscript expressions. This evaluator must implement the exact same concrete semantics as the target hardware for which programs are proved equivalent: identical overflow behavior, support of different bitwidths, etc. This is presented in Sec. 3.4.
- A memory storage system, to store concrete and symbolic values for every variable touched during interpretation, including temporary/local variables. We implement a dynamically allocated sparse tensor approach to this end; this is presented in Sec. 3.3.
- A CDAG building system, which manipulates symbolic expressions, building them by partial evaluation; this is presented in Sec. 4.

3.2 AST Traversal for Interpretation

Our interpreter traverses the program AST following C/C++ execution conventions. It terminates when there are no more instructions to interpret, that is, when it has reached the end of the control-flow of the region analyzed. We support most classical C constructs: for, while, do-while, if, switch, return, break, as well as function definitions and function calls.
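The traversal just described can be illustrated with a toy interpreter over a miniature AST. All node kinds and helper names below are illustrative, far smaller than PAST; the point is that a C-style `for` is executed by concretely evaluating its init/test/step expressions:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A deliberately tiny AST: just enough to interpret a counted loop concretely.
struct Node {
  enum Kind { Const, Var, Add, Less, Assign, Seq, For } kind;
  long value = 0;                                // Const payload
  std::string name;                              // Var / Assign target
  std::vector<std::shared_ptr<Node>> kids;
};
using P = std::shared_ptr<Node>;
static P mk(Node::Kind k) { auto n = std::make_shared<Node>(); n->kind = k; return n; }
static P cst(long v) { auto n = mk(Node::Const); n->value = v; return n; }
static P var(const std::string& s) { auto n = mk(Node::Var); n->name = s; return n; }
static P bin(Node::Kind k, P a, P b) { auto n = mk(k); n->kids = {a, b}; return n; }
static P assign(const std::string& s, P e) {
  auto n = mk(Node::Assign); n->name = s; n->kids = {e}; return n;
}

// Concrete expression evaluation over the interpreter memory.
static long eval(const P& e, std::map<std::string, long>& mem) {
  switch (e->kind) {
    case Node::Const: return e->value;
    case Node::Var:   return mem[e->name];
    case Node::Add:   return eval(e->kids[0], mem) + eval(e->kids[1], mem);
    case Node::Less:  return eval(e->kids[0], mem) < eval(e->kids[1], mem);
    default: assert(false); return 0;
  }
}

// Statement execution: AST traversal drives the control-flow.
static void exec(const P& s, std::map<std::string, long>& mem) {
  switch (s->kind) {
    case Node::Assign: mem[s->name] = eval(s->kids[0], mem); break;
    case Node::Seq:    for (auto& k : s->kids) exec(k, mem); break;
    case Node::For:    // kids: init, test, step, body — C-style for loop.
      exec(s->kids[0], mem);
      while (eval(s->kids[1], mem)) { exec(s->kids[3], mem); exec(s->kids[2], mem); }
      break;
    default: assert(false);
  }
}
```

In the real interpreter, any sub-expression that cannot be concretely evaluated is promoted to a symbolic CDAG instead of aborting; this sketch only shows the fully concrete path.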
When a function is called, its definition must be available in the region analyzed, and control is simply transferred to this function. We support pass-by-value and pass-by-reference for function calls. We require all function arguments to be restrict to ensure no aliasing, as we do not perform any aliasing analysis and assume differently named arguments point to different memory regions. We support variable declarations, and currently only a very limited form of pointer arithmetic and type casting; this is purely a limitation of our current implementation, as supporting those in full does not pose any particular technical challenge.

3.3 Interpreter Memory

The interpreter maintains a memory for the program. Every variable (or memory cell for an array) accessed during the program has associated storage, where we store (a) whether the variable is (currently) symbolic or concrete; (b) the concrete value or the symbolic CDAG representing the expression to compute that variable; (c) whether the variable is live-in or not (that is, whether it is read before being written); and (d) statistics useful for subsequent program optimizations, such as the number of reads/writes to the variable. To control memory storage size, especially when manipulating arrays, we implement the memory as a dynamically allocated sparse tensor. As we support C99 multidimensional arrays, whose size is not necessarily known at compile-time, when an array is sparsely accessed (e.g., the only element touched during execution is A[42][51]), storing its dense representation (indices \( 0..42 \times 0..51 \)) would consume unnecessary memory.

3.4 Concrete Expression Evaluation

Equipped with a traversal mechanism and memory for the interpreter, we can now evaluate a program. A fundamental requirement for our system to be able to prove equivalence is the availability of a concrete expression evaluator that implements exactly the same concrete semantics as the target hardware.
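To see why evaluator fidelity matters, consider folding integer expressions under C's semantics: a different integer model would fold the same source expression to a different constant. A minimal sketch (helper names are ours, not the tool's):

```cpp
#include <cassert>
#include <cstdint>

// Constant folding must mirror the target's integer model exactly.
// Unsigned 32-bit addition wraps modulo 2^32 in C.
static uint32_t foldAddU32(uint32_t a, uint32_t b) { return a + b; }

// Signed integer division truncates toward zero in C (not floor division).
static int32_t foldDivI32(int32_t a, int32_t b) { assert(b != 0); return a / b; }
```

A floor-dividing evaluator would fold `-7 / 2` to `-4` instead of `-3`, silently "proving" equivalent two programs that differ on the target, or vice versa.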
Indeed, in a nutshell, our interpreter will replace expressions like `j = 0; j++; print(j);` by `print(1);`. This simplification is what makes our system robust to any implementation of the control-flow, and is fundamental for its execution time, but it requires a verified implementation of the concrete evaluator. This expression evaluator must carefully implement the exact same behavior as the target architecture, for example as described in the documentation of the HLS framework being used. Issues of overflow, handling of various bitwidths, type promotion, etc. must be implemented exactly as they behave on the target hardware for our system to conclude a proof of equivalence, as otherwise different behavior may be observed when running on the concrete target hardware. In this work we implemented the concrete semantics of the C language for integer expressions, supporting all available unary and binary operators, including bitfield manipulation. Our PAST-based integer expression evaluator is implemented in about 300 lines of C, making its manual verification accessible. We remain however dependent on the compiler and test machine to execute this program bug-free; using certified compilers such as CompCert [29] may be desired for increased confidence.

3.5 Overview Algorithm for SICF Interpretation

Algorithm 1 below outlines our approach to hybrid concrete/symbolic interpretation. Our implementation behaves identically, but is optimized for speed: it minimizes the work done for each statement, reusing prior computations whenever possible and caching per-statement analyses. We remark the complementary nature of approaches such as KLEE [38], which computes a set of possible execution paths to support "symbolic" control-flow by interpreting LLVM bytecode, but is mostly limited to integer symbolic variables; or Alive2 [31], which can expose subtle bugs in LLVM programs.
We target coverage of the "converse" case: supporting equivalence of symbolic expressions, especially for floating-point operations or any well-defined type for symbolic variables, but requiring that the control-flow can be completely computed by concrete interpretation at compile-time. Combining both approaches is the subject of future work. This leads to the following definition of the class of programs we support: Statically Interpretable Control-Flow (SICF) programs, that is, programs whose control-flow and memory-addressing expressions can all be concretely evaluated during interpretation.

Algorithm 1: Hybrid interpretation of one statement (outline).

```
Expr := clone of the statement's right-hand side
Expr := ConcreteEvaluate(Expr)
if Expr is a concrete value then
    storeAsConcreteValue(writeLocation, Expr)
else
    Expr := BuildCDAGByReplacement(Expr)
    storeAsSymbolicCDAG(writeLocation, Expr)
```

4 CDAG CONSTRUCTION

We now present our approach to CDAG construction, integrating it with program interpretation as illustrated in Fig. 1. CDAGs are agnostic to temporary storage, and only represent a computation, not how it is implemented; the interpreter distinguishes between concrete and symbolic values while building them.

4.1 Computation Directed Acyclic Graphs

A CDAG captures an expression that computes a single value [14]. Intuitively, CDAGs can be built from a set of ordered instructions where the operands are named. As detailed in Sec. 5, we limit the set of variables considered for equivalence checking to the live-in/live-out values of the program. This restriction serves as a sufficient condition for our purposes.

4.2 Procedure for CDAG Construction

We aim to build a CDAG for a variable incrementally, by interpreting expressions in the order they would be executed by the program. We first clone the AST of the original expression in the program, e.g. \( y[i] + A[i*N+j] \times x[j] \), and progressively rewrite it:
- We traverse the AST of the expression in postfix order, and for every operation which has only concrete value(s) as operands, we invoke the concrete expression evaluator and replace the associated subtree with this concrete value. When a concrete variable is referenced in an expression, it is first replaced by its current concrete value from the interpreter memory.
- We then re-traverse this modified AST in postfix order, now simplified from integer computations, e.g. the tree is now \( y[0] + A[0] \times x[0] \). For every variable left referenced in the AST, we replace it by its known current CDAG, if any; otherwise we create a symbolic node modeling this value, e.g. \( A[0] \) and \( x[0] \).

During this process, every symbolic variable being referenced has been replaced by the expression which computes its value. By design, the resulting tree can only refer to symbolic values (typically live-in data) and numerical constants.

4.3 Complexity Considerations

CDAGs built as described above, by repetitive replacement of referenced variables by their known CDAG so far, have the fundamental property of being agnostic to how the computation is implemented, including when using local storage. However, this comes at an important complexity cost: for a program that executes \( O(n) \) operations, its CDAG will have at least \( O(n) \) nodes. Taking for example matrix multiplication, it has \( O(n^3) \) FMA operations, therefore the CDAG will have at least \( O(n^3) \) nodes to represent these operations.

There also exists a degenerate case that can make CDAGs grow exponentially, for example when a variable is used at multiple places in the same expression, itself being in a reduction-style computation: \( s := s + s + s \) repeated under a loop leads to a final CDAG of \( O(3^n) \) nodes in theory.
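The degenerate case can be kept linear in space by sharing subtrees rather than deep-copying them. A sketch under the assumption of shared-pointer nodes (illustrative of the idea, not the tool's exact data structure): the logical tree for twenty iterations of \( s := s + s + s \) has \( 3^{20} \) leaves, yet only 41 distinct nodes are ever allocated.

```cpp
#include <cassert>
#include <memory>
#include <unordered_map>
#include <unordered_set>

// A CDAG node: '+' for addition, 's' for the symbolic live-in leaf.
struct CNode {
  char op;
  std::shared_ptr<CNode> l, r;
};
using CP = std::shared_ptr<CNode>;
static CP leaf() { return std::make_shared<CNode>(CNode{'s', nullptr, nullptr}); }
static CP add(CP a, CP b) { return std::make_shared<CNode>(CNode{'+', a, b}); }

// Leaves of the *logical* expression tree, memoized over shared nodes.
static long long leaves(const CP& n, std::unordered_map<CNode*, long long>& memo) {
  if (!n->l) return 1;                       // leaf node
  auto it = memo.find(n.get());
  if (it != memo.end()) return it->second;
  long long v = leaves(n->l, memo) + leaves(n->r, memo);
  memo[n.get()] = v;
  return v;
}

// Distinct nodes actually stored in memory.
static size_t distinctNodes(const CP& n, std::unordered_set<CNode*>& seen) {
  if (!seen.insert(n.get()).second) return seen.size();   // already visited
  if (n->l) { distinctNodes(n->l, seen); distinctNodes(n->r, seen); }
  return seen.size();
}
```

Each loop iteration allocates only two new `+` nodes while tripling the logical leaf count, so storage grows linearly even though the represented expression grows exponentially.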
In our implementation we address this problem by ensuring the space complexity remains roughly \( O(n) \) even in this case, using pointers and caching (shallow copies) to represent identical subtrees.

Finally, the complexity of our implementation for CDAG construction, both in time and space, is typically \( O(n) \) for a program executing \( O(n) \) operations, for sequential verification. This aspect is fundamental to the performance of verification, as shown in Sec. 7.

5 EQUIVALENCE CHECKING

We now describe equivalence checking for sequential programs; our tool reports possible bugs and their location to the user otherwise. Concurrency checking is presented later in Sec. 6. Intuitively the process amounts to checking full isomorphism of the CDAGs produced by $P_A$ and $P_B$ for the same memory cell, for all memory locations that are live-in and live-out for the programs, that is, memory locations reachable outside the function(s) checked for equivalence. We also support a variety of rewrite-rule based normalizations to increase equivalence coverage, including support of transformations altering the order and count of operations.

5.1 Theoretical Foundation

By construction, our system can prove the equivalence of a pair of programs, under the hypothesis that all global arrays and function arguments do not alias. We require the ability to match data that is live-in (that is, read first before being written) and live-out (that is, alive after the program region terminates) between $P_A$ and $P_B$; this is easily achieved by encapsulating the regions to analyze in a single entry function with well-defined arguments.

**Theorem 2 (Equivalence of Programs).** Given two programs such that the interpreter terminates by reaching the end of their control-flow without error.
They are semantically equivalent if, for every memory cell that is live-in/live-out to the programs, the CDAGs produced by each program for that cell are semantically equivalent.

The proof requires that $P_B$ replaces $P_A$ in a larger program, ensuring the same execution context necessarily holds for both. It requires that the integer expression evaluator implements exactly the concrete semantics of the target hardware, making code replacement via partial concrete evaluation in the program necessarily equivalent to the code before evaluation. As the interpreter can only terminate if exclusively concrete values are used in the control-flow and array subscripts, no other execution path than the one interpreted can exist for the given input programs. CDAGs are built by successive equivalent replacements: if the process terminates, they correspond exactly to the computation producing each output value, independently of any temporary data used. If CDAGs are identical for the same memory cell for both programs, then they must produce the same output value for this cell; the programs are equivalent if this is true for every live-in/out memory cell. However, our framework does not prove non-equivalence in general. For example, the absence of a rewrite rule in the system to show that $\text{pow}(x, 2)$ is equivalent to $x \times x$ would prevent proving these two expressions equivalent. While our proof of equivalence holds for any superset of the semantics considered, exposing differences in CDAGs after normalization only indicates the programs may not be equivalent under a subset of the semantics.

**Corollary 1 (Non-Equivalence of Programs).** Given two programs such that the interpreter either fails to terminate, or such that their CDAGs are not shown to be semantically equivalent, then these two programs may still be semantically equivalent.
To compute the isomorphism of CDAGs, we simply perform, for each CDAG, a merged prefix+postfix collection of its nodes to form a vector of size $2n$ for a CDAG of $n$ nodes, and check the strict structural equality of the two vectors obtained.

5.2 CDAG Normalizations

Our system can natively detect equivalence when the count and order of operations performed to produce a CDAG is identical for both $P_A$ and $P_B$. That is, $y[i] = y[i] + A[i*N+j] \times x[j]$ is not natively equivalent to $y[i] = y[i] + x[j] \times A[i*N+j]$: the CDAGs are not identical. To handle a wider class of equivalences, we augment the system with a post-normalization of CDAGs, applied if checking their isomorphism originally fails. This post-processing has polynomial time complexity, while the entire process presented so far has a time complexity linear in the number of operations executed by the programs. Specifically, the user can declare valid semantics-preserving rewrite rules to be applied on the CDAGs. For example, commutativity may be allowed for floating-point operations: $f_{fp}(a, b) \leftrightarrow f_{fp}(b, a)$ where $a, b$ are arbitrary subtrees. Then, for every rewrite rule provided, which may include rules that change the operation count such as distributivity/factorization, we modify the CDAGs, applying the rules greedily until no more change can be achieved. This does not ensure completeness, but computes quickly enough to maintain practicality for the framework. In contrast, techniques like equality saturation [45] may be able to saturate the representation and bring completeness, but at a very high computational cost that is impractical for trees of thousands/millions of nodes as we manipulate. Note for the special case of associativity/commutativity, we implement a much faster approach, by simply sorting every commutative node in the CDAG and its children using a lexicographic ordering of the subtrees, leading to a canonical CDAG representation under associativity/commutativity.
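The commutativity fast path can be sketched as follows: serialize each subtree in prefix form, sort the children of commutative operators on their serialization, then compare the resulting canonical serializations (node and helper names are illustrative, not the tool's):

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// An expression node: "+"/"*" operators, or a leaf like "A[0]".
struct ENode {
  std::string op;
  std::vector<std::shared_ptr<ENode>> kids;
};
using EP = std::shared_ptr<ENode>;
static EP lf(const std::string& s) { return std::make_shared<ENode>(ENode{s, {}}); }
static EP op2(const std::string& o, EP a, EP b) {
  return std::make_shared<ENode>(ENode{o, {a, b}});
}

// Prefix serialization doubles as the lexicographic sort key.
static std::string serialize(const EP& n) {
  std::string s = "(" + n->op;
  for (auto& k : n->kids) s += " " + serialize(k);
  return s + ")";
}

// Canonicalize bottom-up: sort children of commutative operators.
static void canonicalize(const EP& n, bool allowCommute) {
  for (auto& k : n->kids) canonicalize(k, allowCommute);
  if (allowCommute && (n->op == "+" || n->op == "*"))
    std::sort(n->kids.begin(), n->kids.end(),
              [](const EP& a, const EP& b) { return serialize(a) < serialize(b); });
}
```

After canonicalization, strict structural equality of the serializations decides equivalence modulo commutativity; declaring the rule (`allowCommute`) is the user's responsibility, as it asserts the reordering is semantics-preserving on the target.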
6 VERIFICATION WHEN USING FIFOS

We now present our approach to verifying that the insertion of FIFOs to communicate data between functions, and the associated program restructuring, is correct. We distinguish two cases: a sequential verification approach, where we assume FIFOs are of infinite depth; and a dataflow-style verification approach, where we assume FIFOs are of finite depth, and functions manipulating FIFOs appear in a dataflow-style region such that they execute concurrently, each being activated or waiting for data based on FIFO readiness and use.

6.1 FIFOs in Sequential Verification

Our approach is a straightforward extension of the sequential verification approach presented earlier. We rely on the assumption that functions producing data appear prior to functions consuming that data in sequential execution order, a realistic assumption, e.g., in the AMD Vitis toolchain for hls::stream FIFOs. We assume here FIFOs of infinite size, hence non-blocking writes. We substitute the API calls for reading/writing FIFOs in the program by our own emulated read/write implementation, using (a) a self-growing array of the same type as the FIFO, with start/end position pointers; and (b) for every write, we store the element in this array at the first available position end, then increment it; for reads we do the converse with start. This simple approach cannot catch concurrency bugs such as deadlocks or races, in contrast to our dataflow-style verification below; however it catches bugs in the program restructuring and in how the FIFO is used, while maintaining our target complexity of $O(n)$ time/space for $O(n)$ statements executed in the program.

6.2 FIFOs in Dataflow-style Programs

Handling FIFOs of a fixed depth in a dataflow-like implementation creates the need to handle a form of concurrent execution between functions, to expose possible deadlocks and race conditions.
In this work we limit ourselves to supporting coarse-grain dataflow-style concurrency, for example coarse-grain #pragma HLS dataflow regions as supported by Vitis. Yet our work paves the way for supporting more general concurrency between functions: to address this problem we (a) made our interpreter robust to multitasking and interrupt-based halt/resume of function interpretation; and (b) designed a novel approach to interpretation that involves heavy memory snapshots and bookkeeping, so that our verification holds for any possible valid concurrent schedule that can be executed, while interpreting only a single one of these valid concurrent schedules.

Concurrent-capable interpreter. We equip our sequential interpreter with two important features. First, the ability to schedule a function to execute from a list of functions that are executing concurrently (e.g., those listed under the dataflow-style pragma). Second, the ability to interrupt and resume a particular function at any point: this means we can implement waiting/interrupting a function until a particular semaphore has reached a value, and resuming it when it has. We use this interrupt-and-semaphore approach to represent blocking FIFO reads/writes, using semaphores to capture when a FIFO is ready to read or write.

Verification of concurrent functions. Our approach is fully implemented, and available in our open-source verification tool. Due to space constraints, we limit ourselves to conveying the key intuitions behind computing the earliest schedule of a concurrent program, and using this information to check for the absence of deadlocks and of any read/write conflicts (data being read/written by different concurrent functions at possibly the same time in some valid concurrent schedule); we do not present proofs here.

Concurrent verification approach.
By design, our approach is fully independent of the target HLS toolchain used; we therefore implement a conservative verification: if there exists a schedule that can deadlock or race, amongst all possible valid schedules or HLS approaches used, then we report so. In particular, we assume every instruction may execute in zero cycles, and only blocking reads/writes may trigger the necessity for some operations to happen before others. That is, all synchronizations between concurrent functions that read/write the same shared memory location must be explicit in the program, by using a blocking FIFO to synchronize these reads/writes. To implement FIFOs, we use semaphores to communicate their readiness state between concurrent functions. FIFOs are implemented as arrays as in the sequential case; we now aim to ensure no deadlock nor read-write conflict.

**Definition 3 (Semaphore and Timestamps).** A semaphore is a counter with value \( v \) that represents the number of elements in a blocking FIFO of depth \( d \); a blocking read is possible when \( v > 0 \) and a blocking write is possible when \( v < d \). A timestamp \( T_S \in \mathbb{N} \) is assigned to every semaphore \( S \) used in function \( f \). Its value is 0 at start, and is incremented when a blocking read/write operation on \( S \) in \( f \) becomes ready. We use \( T_S \) to build a monotonically increasing timestamp \( T_{S,f}(s) \) for all interpreted operations \( s \) (in their sequential order) in \( f \) that depend on the readiness of \( S \), and perform subsequent checks for deadlocks and read-write conflicts. We have:

**Definition 4 (Minimal Timestamp and Concurrency).** Given two interpreted operations \( s_1, s_2 \) in the full program, let \( T_{M,f_1}(s_1) = \min_S(T_{S,f_1}(s_1)) \) be the minimal timestamp of \( s_1 \) in \( f_1 \) (resp. \( s_2 \) in \( f_2 \)).
If \( T_{M,f_1}(s_1) < T_{M,f_2}(s_2) \) then necessarily, under any valid execution of the program, \( s_1 \) must complete its execution before \( s_2 \) starts. Conversely, if \( T_{S,f_1}(s_1) = T_{S,f_2}(s_2) \) then there is a possible concurrent schedule executing both operations at the same time. It is therefore sufficient, to check the absence of read-write conflicts, to consider any operations that access data shared between concurrent functions and have the same timestamp:

**Definition 5 (Read/Write Conflict).** Given \( s_1, s_2 \) such that they access the same shared memory location, that is, data non-local to the concurrent functions. If \( \exists S \) such that \( T_{S,f_1}(s_1) = T_{S,f_2}(s_2) \) and one of these accesses is a write, then a read-write conflict can occur in a possible concurrent schedule.

If all concurrent functions have the same timestamp for each semaphore for every operation, they can be executed in any order and the semaphore values will be valid; but this is not the case otherwise: we must restore the semaphore values appropriately before resuming a concurrent function. This bookkeeping is done for all timestamp values associated with shared variable accesses. It increases space consumption, but with the benefit of allowing us to interpret a single (specific) concurrent schedule while still proving correctness for all possible valid schedules. Executing all functions concurrently requires storing the value history of semaphore \( S \) from \( T_{\min} \ldots T_{\max} \), where \( T_{\min} \) and \( T_{\max} \) are the current minimum and maximum timestamp values for \( S \) over all concurrent functions. Before resuming interpretation of a function, the value of each semaphore at the relevant timestamp can be restored, and read-write conflicts can be checked across a range of histories for a particular memory cell, allowing us to emulate fully concurrent execution sequentially. Details are available in our implementation [36].
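The conflict check itself reduces to comparing logged accesses: each access to a shared cell is recorded as a (timestamp, cell, is-write) tuple per function, and two accesses from different functions may overlap only when their timestamps are equal (a sketch with illustrative names; the full system also replays semaphore value histories when resuming a function):

```cpp
#include <cassert>
#include <string>
#include <vector>

// One logged access to a shared memory cell during interpretation.
struct Access {
  long ts;            // timestamp when the access may execute
  std::string cell;   // shared cell, e.g. "x"
  bool isWrite;
};

// A read-write conflict exists if two accesses from different functions touch
// the same cell at the same timestamp and at least one of them is a write.
static bool hasConflict(const std::vector<Access>& f1,
                        const std::vector<Access>& f2) {
  for (const auto& a : f1)
    for (const auto& b : f2)
      if (a.ts == b.ts && a.cell == b.cell && (a.isWrite || b.isWrite))
        return true;
  return false;
}
```

On the foo/bar example below, the blocking FIFO read bumps bar's timestamp on x to 1, so its accesses cannot overlap with foo's at timestamp 0; without the FIFO all timestamps stay at 0 and a race on x is reported.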
This backup/restore step is fundamental for the general correctness of our approach; however it incurs a low-polynomial complexity, slowing the verification, as shown in Sec. 7 below. Finally, deadlocks are detected using a simple approach: if the interpreter has one or more currently executing functions (i.e., not completed by reaching the end of their control-flow) that cannot make further progress, because of a blocking operation which has not reached a ready state, then we report a deadlock:

**Definition 6 (Deadlock).** Given \( f_1, \ldots, f_n \) a set of concurrently executing functions. If \( \exists f_i \) in a blocking state which depends on semaphore \( S \), and no other \( f_j \) can update \( S \), either because they completed or because they do not modify \( S \), then the program deadlocks under a possible concurrent schedule.

We remark the importance of keeping histories of semaphore values and restoring them before resuming a concurrent function, so as to consider all possible values a semaphore may take before changing state, and therefore conclude the absence of deadlocks.

Table 1: AutoSA systolic array verification results. Left is sequential-only verification, right uses blocking FIFOs.
<table> <thead> <tr> <th>Array Size</th> <th>LoCs</th> <th>#Stmts</th> <th>#Nodes</th> <th>Time</th> <th>Mem.</th> <th>#Workers</th> <th>#FIFOs</th> <th>#Stmts</th> <th>#Nodes</th> <th>Time</th> <th>Mem.</th> </tr> </thead> <tbody> <tr> <td>2x2</td> <td>1.1k</td> <td>1.7k</td> <td>44</td> <td>0.01s</td> <td>4MB</td> <td>22</td> <td>31</td> <td>3.3k</td> <td>44</td> <td>0.01s</td> <td>5MB</td> </tr> <tr> <td>4x4</td> <td>1.6k</td> <td>7.5k</td> <td>304</td> <td>0.01s</td> <td>5MB</td> <td>56</td> <td>91</td> <td>17k</td> <td>304</td> <td>0.02s</td> <td>7.5MB</td> </tr> <tr> <td>8x8</td> <td>3.5k</td> <td>41k</td> <td>2.2k</td> <td>0.05s</td> <td>11MB</td> <td>172</td> <td>307</td> <td>109k</td> <td>2.2k</td> <td>0.11s</td> <td>21MB</td> </tr> <tr> <td>16x16</td> <td>10.5k</td> <td>268k</td> <td>17k</td> <td>0.32s</td> <td>46MB</td> <td>596</td> <td>1k</td> <td>787k</td> <td>17k</td> <td>1.05s</td> <td>132MB</td> </tr> <tr> <td>32x32</td> <td>37.6k</td> <td>1.9M</td> <td>134k</td> <td>2.76s</td> <td>447MB</td> <td>2.2k</td> <td>4k</td> <td>6.2M</td> <td>134k</td> <td>27.9s</td> <td>1.6GB</td> </tr> <tr> <td>64x64</td> <td>144.6k</td> <td>14M</td> <td>1.06M</td> <td>24.1s</td> <td>5.9GB</td> <td>8.5k</td> <td>16k</td> <td>54M</td> <td>1.06M</td> <td>16m</td> <td>34GB</td> </tr> </tbody> </table> Example and Discussions. Our approach provides a conservative analysis of read-write conflict and deadlocks: if there exists a possible schedule under which these occur, we report so. We illustrate with a simple example: ``` float x; // shared fifo_t f; void foo(int a) { a *= 1; x += 1; f.write(42); } void bar(int a) { a *= 3; f.read(); x *= 3; } ``` Suppose we execute foo and bar concurrently. To ensure that reads and writes to the shared variable x do not execute at the same time, under any possible schedule, inserting a blocking FIFO write/read is sufficient to synchronize them. Specifically, operations of foo are assigned a timestamp, 0, for all. 
Similarly for bar, up to the blocking read. When the read changes status (data is available to read after interpreting f.write(42)), the timestamp for the next operation in bar is incremented to 1. When checking whether a read/write conflict exists, we have the tuples foo (0, x, read) and bar (1, x, write) being distinct, so no conflict is reported here (1 ≠ 0). Note the final CDAG for x is (x + 1) * 3 here. But suppose the f.write and f.read are absent. We assume zero latency for operations: the accesses to x would occur at the same virtual timestamp: 0, for both foo and bar, and we would report a read/write conflict on x; foo (0, x, read) and bar (0, x, write) have identical prefix and a read-write sequence. However when synthesizing this program with a particular HLS toolchain, operations have non-zero latencies, and accesses to x may happen at different cycles even without the blocking FIFO. It is enough to have a latency of 1 cycle for + and 10 for * (assuming data accesses are achieved in 1 cycle) for this program to execute foo before accesses to x in bar are executed, leading to a deterministic execution. Now suppose only f.write is absent. Interpreting foo runs to completion, however bar is actively waiting (blocking read) on f. As it cannot further change state, we report a deadlock as per Def. 6. Our approach is a conservative analysis for concurrency correctness, which can incur false negatives: we may report conflicts that could be addressed by actual timing in the final design. We never generate false positives: if we report the absence of deadlocks and read/write conflicts, then under any possible valid concurrent schedule or HLS approach, the programs cannot have any conflict. 7 EXPERIMENTAL RESULTS All experiments are performed on Intel Alder Lake Core i9 12900K, with 128GB of RAM, 30MB cache and running at 5.2GHz single-core frequency. All verification experiments use a single CPU core. 
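The tuple-based conflict check just described can be sketched as follows. This is illustrative Python, not the tool's data structures; each access is recorded as a hypothetical `(worker, timestamp, variable, kind)` tuple:

```python
# Sketch of the read/write conflict check on virtual-timestamp tuples:
# two accesses conflict if distinct workers touch the same variable at
# the same virtual timestamp and at least one of the accesses is a write.
def conflicts(accesses):
    out = []
    for i, (w1, t1, v1, k1) in enumerate(accesses):
        for w2, t2, v2, k2 in accesses[i + 1:]:
            same_slot = (w1 != w2) and (t1 == t2) and (v1 == v2)
            if same_slot and 'write' in (k1, k2):
                out.append((v1, t1))
    return out
```

With the blocking FIFO, bar's access to x carries timestamp 1 and no conflict is reported; without it, both accesses sit at timestamp 0 and a conflict on x is flagged.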
We use AMD Vitis HLS v2022.1 to simulate and synthesize designs. 7.1 Verifying A Systolic Array Compiler Systolic array compilers generate high-performance systolic designs from high-level functional descriptions [11, 26, 40, 43]. However, formally verifying the generated designs remains a challenge due to the complex transformations during compilation and the dataflow parallel nature of systolic architectures. We generate systolic arrays of different sizes for matrix multiply kernels with AutoSA [43], and verify the generated HLS program against the input high-level functional description in C, which is a 5-line matrix-multiply kernel. Tab. 1 lists the verification results: sequential-only verification on the left, and verification considering fixed-depth FIFOs using coarse-grain dataflow on the right. We report the number of Lines of Code (LoC) in the input program, the number of statements interpreted (Stmts), the number of nodes in the final CDAGs for the live-in/out variables checked for equivalence (Nodes), the time to complete interpretation of this file, and the maximal memory used during the process. For the concurrent case, the number of concurrent Workers and the number of FIFOs is also displayed. We note the significant time and memory cost of performing deadlock/race detection for any valid concurrent schedule. Note the time to interpret the original 5-line matrix-multiply kernel, and verify the equivalence of CDAGs, is negligible here: it amounts to 2.5s for 64x64, and less than 1s for all others. We have also manually inserted bugs in the code, to validate that our tool can successfully catch them. No bug was found in the codes produced by AutoSA. 7.2 Verifying Customizations in HeteroCL HeteroCL is a domain-specific compiler with decoupled customizations for hardware accelerator designs [16, 25, 46]. PolyBench/HCL: We implement and customize the PolyBench polyhedral benchmark suite [37] with HeteroCL, and verify the customized kernels.
PolyBench consists of 30 kernels covering data mining, linear algebra kernels and solvers, and stencil kernels. We customize the kernels with optimizations listed in Tab. 3. We choose the medium kernel sizes for verification to demonstrate real-world problem sizes. The number of statements of the medium-size benchmarks ranges from 239K (jacobi_1d) to 1.6 billion (Floyd_warshall); the median number of statements is 22M (heat_3d). Verification time ranges from 1.1 seconds to 2.1 hours, with a median run time of 192s. The memory footprint ranges from 0.1 MB to 172 GB, with a median memory footprint of 3.5 GB. BERT: Transformer on FPGA Accelerator: Transformer delivers state-of-the-art performance for various tasks in NLP and vision [41]. The building block of transformer models is matrix-multiplication, which provides abundant opportunities for hardware acceleration [22]. We build an FPGA accelerator for the BERT-base model [13] with 12 attention heads, an input feature dimension of 768, and a hidden dimension of 3072 in the feed-forward network. The BERT accelerator is implemented with HeteroCL, and customized with optimizations listed in Tab. 3. We deploy the accelerator on an AMD U280 FPGA with three Super-Logic Regions (SLRs). To meet the timing requirement at the routing stage, we add an additional customization to establish new function boundaries to group kernels and assign them to each SLR. The BERT accelerator verification executes 1.37B statements and checks 693M CDAG nodes, taking 27 minutes, and has a memory footprint of 56.9 GB. Table 3: HLS optimizations considered in evaluation.
<table> <thead> <tr> <th>Optimization</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>reorder</td> <td>Loop reordering</td> </tr> <tr> <td>tile</td> <td>Loop tiling</td> </tr> <tr> <td>stream</td> <td>Use FIFO streaming between two HLS kernels</td> </tr> <tr> <td>line/window buffer</td> <td>Insert reuse buffers to cache rows/columns of input matrix</td> </tr> <tr> <td>write buffer</td> <td>Insert write buffers to cache partial results</td> </tr> <tr> <td>double buffer</td> <td>Create ping-pong buffers and alternating read/write logic</td> </tr> <tr> <td>unify</td> <td>Unify multiple functions for resource sharing</td> </tr> <tr> <td>layout</td> <td>Transform memory layout</td> </tr> </tbody> </table> 7.3 Verifying Intel HLS Examples Since we verify transformations before HLS, our verification method is not limited to any specific HLS tools or vendors. We verify the example HLS designs from Intel HLS [18] against their C reference program in the testbench. The Intel HLS sample designs consist of five kernels: counter, image downsampling, interpolation and decimation filters, Gram-Schmidt QR factorization, and YUV-to-RGB color space conversion. The customizations of the HLS kernels include loop reordering, unrolling, and customized storage implementation. For example, the interpolation and decimation filter kernels use a temporary partial delay line to break loop-carried dependency. We verify all five cases in under 2 minutes. The example with the largest problem size is QR factorization, which takes 63.6 seconds to verify, and has a memory footprint of 1.4GB. 7.4 Verifying Compositions of Optimizations Some optimizations do not change the program semantics alone, but may cause bugs when composed with other optimizations. Some optimizations only preserve semantics when composed with others. 
We select two representative kernels to apply specifications of HLS optimizations: two-conv for two back-to-back 2D convolution kernels, and binary-conv for a binary convolution kernel on 4D input tensors. The input size of the two-conv kernel is 8×8, and both convolution kernels are 3×3. The binary-conv input size is \((N, C, H, W) = (1, 3, 8, 8)\) and the convolution kernel dimension is \((OC, IC, K, K) = (2, 3, 3, 3)\). Tab. 2 shows the verification results on optimizations, including the number of CDAG nodes and running time. We discuss each specification with cases listed in the table. FIFO stream and line buffers. The original two-conv program has an array to store the intermediate result. The second convolution kernel reads the intermediate array in a sliding window. Case T.1 adds a line buffer to the second convolution kernel to increase data reuse and serialize the data access. Case T.2 simply replaces the intermediate array with a streaming FIFO. Without a line buffer in the second kernel to serialize its data access, this optimization causes an access pattern violation. Case T.3 implements the correct composition of both optimizations. Loop reorder and reuse buffers. Some HLS optimizations make certain assumptions about the program, and further optimizations that break the assumptions can cause bugs. For example, reuse buffer insertion assumes a certain loop order to load correct data. In case B.1, we verify that loop reordering from channel-first (NCHW) to channel-last (NHWC) does not change program semantics. In case B.2, we first insert a reuse buffer at the height \((H)\) loop to create a line buffer, then insert another reuse buffer at the width \((W)\) loop to create a window buffer, and we verify that inserting reuse buffers does not cause bugs either. However, when we apply reordering after reuse buffer insertion, the output channel \((C)\) loop is moved inside the width \((W)\) loop, causing the line buffer to load repeated input rows. As shown in Tab.
2, case B.3, our tool detects this issue, caused by an optimization dependency violation. Layout transformations. We transform the memory layout of the input multi-dimensional array from channel-first (NCHW) to channel-last (NHWC) in case B.4. Memory layout transformation benefits access locality, but changes the program semantics. Our tool correctly reports the difference in case B.4. 7.5 Detecting Simulation Mismatches C simulation can miss critical issues such as over-bound array access, which leads to hard-to-debug issues and may only be discovered during RTL co-simulation. This case study demonstrates how our tool efficiently finds memory partition bugs that C simulation does not uncover. Lst. 2 shows such an example: applying array partitioning on a loop kernel with over-bound array access. Since C/C++ stores arrays in contiguous memory, the over-bound array access \(A[1][3]\) overflows to the next row \(A[1][4]\). Such over-bound access does not happen in the synthesized RTL design. Our tool symbolically evaluates the array index expression. For array partitioning, it creates separate arrays for each subarray, and treats the original array as a live-in variable.
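The aliasing effect described above can be illustrated with a small sketch. This is purely illustrative (it is not Lst. 2; the array shape and values are made up): a flat Python list stands in for C's contiguous row-major memory, and per-row lists stand in for partitioned banks, as in the synthesized design:

```python
# In C's row-major layout, an out-of-bounds column index silently aliases
# a neighbouring element of the flat buffer, so C simulation still reads a
# well-defined value; per-row partitioned storage has no such aliasing.
ROWS, COLS = 4, 4
flat = [10 * r + c for r in range(ROWS) for c in range(COLS)]

def c_sim_read(r, c):
    # what contiguous C memory returns, even when c >= COLS
    return flat[r * COLS + c]

def partitioned_read(banks, r, c):
    # one bank per row: the over-bound read is an error, not an alias
    return banks[r][c]

banks = [flat[r * COLS:(r + 1) * COLS] for r in range(ROWS)]
```

Here `c_sim_read(1, 4)` quietly returns the first element of row 2, while `partitioned_read(banks, 1, 4)` fails: the divergence the verifier must catch.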
7.6 Verification Time and Scalability We typically verify functions such as GEMM at a rate of about 0.5 million statements per second, amounting to about 0.8 MFlop/s, for problem sizes of \(500^2\) and less. The process of proving 2 CDAGs equivalent is typically a negligible factor in the total time, and time is dominated by the number of instructions to interpret. Limits are dominated by memory usage: the CDAGs grow in space consumption linearly with the number of operations, with O(500M) FLOPs in reductions using 50GB. Future work includes run-time compression of CDAGs for increased scalability. Note however, as illustrated in Sec. 7.1, for concurrent verification there are significantly more instructions to execute due to the check-semaphore/update-semaphore operations that need to be executed, and more importantly, significantly larger space being consumed due to the bookkeeping of memory snapshots when workers change status; the 64x64 case shows the limits in memory usage. 8 RELATED WORK Our framework can prove the equivalence of C-style programs under a wide set of code transformations, albeit limited to the class of Statically Interpretable Control-Flow. We are not aware of any tool which can support the same set of programs and/or transformations, but numerous prior works address a similar problem. PolyCheck is a system to prove equivalence of an affine program and its transformed variant via dynamic analysis [6] and supports "arbitrary" iteration reordering transformations. It is however fundamentally limited by the need to find a matching between statements in both programs, preventing it from supporting statement transformations, as well as data/storage transformations. In contrast, our framework does not require the programs to be affine, and supports a vastly richer class of transformations. ISA [1, 42] supports parametric loop bounds, and proves equivalence between a pair of programs.
However, it remains highly limited in transformation coverage [6], preventing its deployment for complex HLS optimizations. Other approaches have been developed, e.g. [20, 39]; however, they are typically limited in applicability, that is, the space of program transformations supported. In general, numerous approaches to prove the equivalence of expressions, in restricted contexts, have been developed, including using equality saturation [44, 45]; however, the computational complexity of such approaches prevents manipulating complex programs with a rich transformation coverage. Deep learning methods have also been proposed to handle equivalence under a rewrite rule system, but the approach only handles programs of a few hundred nodes, without loops [24]. An approach complementary to ours is KLEE [2, 38]. It also uses a form of interpretation, to build a set of feasible execution paths for a program, discovering invariants and proving equivalence of complex programs, including pointer-based data structures. Complex techniques have been built to discover equivalence between programs, including to verify processors, e.g., [5, 23]. Preliminary extensions for floating point support have also been developed [30]; however, they do not scale to the problem sizes nor program complexity we manipulate. KLEE implements a different symbolic interpretation approach; ours is specialized for equivalence of programs with a single concretely interpretable CFG path and concretely interpretable array subscripts, in order to trade off generality for speed. We limit coverage to fixed problem sizes (which are highly relevant in HLS-based designs), but can operate at order(s) of magnitude faster speed than KLEE due to our linear complexity for CDAGs construction and equivalence checking, as well as operating on C semantics. It therefore very significantly widens the class of programs supported for equivalence checking in feasible time.
Other approaches to check the correctness of a program transformation include translation validation [33], including for specialized languages like Halide [8], and HLS [7, 21]. Vericert is a verified HLS framework [17], akin to CompCert [29]. We target source-to-source equivalence, outside of a compiler framework, making our tool agnostic to the optimization framework used, and supporting user-implemented code transformations. Testing is also a classical approach, typically checking that the output data produced by two programs is identical, but no proof is guaranteed. Finally, co-simulation is often performed before final deployment to ensure a design's correctness [19], but as shown in Sec. 7 this execution-based approach may miss bugs that our framework can detect. 9 CONCLUSION Proving the equivalence between two different implementations of the same program provides a verification of correctness of the optimizations for HLS implemented by either humans or tools. Focusing on source-to-source transformations for HLS and imposing sensible restrictions on the programs supported, we have developed a framework that can prove that the result of applying numerous fundamental optimizations, such as data buffering, FIFO-based communications and arbitrary loop transformations, preserves the exact semantics of the original program. Our framework can also verify the correctness of advanced transformations, such as those implemented by an automatic systolic array generator. However, this comes at the cost of restricting the class of programs supported to statically-interpretable control-flow programs, which typically requires known problem sizes at verification time. ACKNOWLEDGEMENTS – This work was supported in part by an Intel ISRA award; U.S. NSF awards #1750399 and #2019306; ACE, one of seven centers in JUMP 2.0, an SRC program sponsored by DARPA; and Grant PID2022-136435NB-I00, funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe", EU.
We are particularly thankful to Jin Yang, Jeremy Casas, and Zhenkun Yang from Intel for their support and guidance on the ISRA project. We also thank Lana Josipović and the anonymous reviewers for their feedback on earlier versions of this manuscript. REFERENCES [34] Louis-Noël Pouchet and Emily Tucker. 2023. PAST: the PoC AST Library, version 0.7.2. https://sourceforge.net/projects/pcc/files/pcc/lib/0.7.2.tar.gz
Register Allocation for General Purpose Architectures Peter Keynnaert Report CW281, March 2000 Katholieke Universiteit Leuven Department of Computer Science Celestijnenlaan 200A – B-3001 Heverlee (Belgium) peter.keynnaert@cs.kuleuven.ac.be Abstract Over the years, register allocation has been recognized as one of the vital problems to solve with respect to compiler design. Putting critical data values and/or addresses into the small but very fast memories that registers are, is essential for achieving high-speed code. This document gives an overview of the most important work in the field, detailing some breakthrough approaches while briefly mentioning a lot of results that were derived from them. 1 Introduction A good register allocator is an important part of today's compilers. This section illustrates why this is so and introduces some vital terminology the reader should be familiar with. 1.1 Goals and terminology The hierarchical structure of the memory model of a load/store architecture as shown in figure 1 plays a crucial role in the execution speed of a program run on such a computer. As the capacity of the different memories increases, their speed decreases.
This implies that generating optimal code (where optimality is defined with respect to the execution time of the program) requires optimal use of the fastest memory type which, in the case of the type of architectures considered here, is the register set. Current techniques for automatic register allocation are far from generating optimal code in the strictest sense of the word (since the register allocation problem is known to be NP-complete, see e.g. [14] or [59] where it is stated that "given a program and an integer k, determining if the program can be computed using k registers is polynomial complete" and "register allocation belongs to a class of problems for which there exists no nonenumerative solution" respectively), but it should be obvious that even so, heuristics for good use of the register set are key elements when optimizing code for speed. Both CISC and RISC machines benefit from good register allocation: in the instruction set of the former, instructions that operate on registers are faster than the ones that do not, while the latter has a lot of operations that can only be performed by instructions where all operands are in registers. **Goals** The goal of register allocation is to try to minimize the number of CPU accesses to the cache or to the main memory by putting as much data as possible in registers. Alternatively, this can be restated as follows: register allocation tries to maximize instruction level parallelism by putting operands of instructions that can be executed concurrently, into registers. **liveness** *Liveness* is an important concept within the context of register allocation. Intuitively, a variable is *live* if it holds a value that may be needed in the future.
As such, a variable \( v \) is live at a program point \( p \) if all of these conditions hold: - \( v \) has been defined in a statement that precedes \( p \) in any path - \( v \) may be used by a statement \( s \) and there is a path from \( p \) to \( s \) - \( v \) is not killed between \( p \) and \( s \) The *live range* of \( v \) is the interval \([p_l, p_d]\) with \( v \) being defined by statement \( p_l \) and \( v \) last used in statement \( p_d \). **domain** Since load and store operations, as well as address calculations, need to be explicitly modeled, the domain to work in will be the one of either low-level intermediate code or assembly language. **register allocation vs. register assignment** Also, note the difference between register allocation and register assignment. Register allocation is the selection process of those variables that will reside in registers at a certain point in time during the execution of the program. The subsequent register assignment phase then picks the specific register that an allocated variable will reside in. This separation of concerns is based on what John Backus stated during the development of the first Fortran compiler, i.e. that the optimization of subscript expressions should be handled separately from the allocating of index registers. Register assignment can be trivial in some architectures, but in others it is not since there may be different register sets for e.g. integers and floating point numbers. **1.2 Overview** Section 2 of this document gives an overview of the existing methods that handle register allocation at the different levels of abstraction of a program where the technique is relevant. In section 3 a brief introduction to instruction scheduling shows its connectivity with register allocation. Also, a summary of various efforts to integrate both techniques is presented and register windows are mentioned. In section 4 some pointers to optimal register allocation techniques are given. 
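The liveness conditions above translate into the classic backward dataflow computation over basic blocks. Here is a minimal sketch, illustrative only and not taken from any of the surveyed allocators, with each block summarized by its use/def sets:

```python
# Backward liveness dataflow over basic blocks, iterated to a fixed point:
#   live_out[B] = union of live_in over successors of B
#   live_in[B]  = use[B] | (live_out[B] - def[B])
def liveness(blocks, succ):
    """blocks: {name: (use_set, def_set)}; succ: {name: [successor names]}"""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        for b, (use, defs) in blocks.items():
            out = set().union(*(live_in[s] for s in succ[b]))
            inn = use | (out - defs)
            if out != live_out[b] or inn != live_in[b]:
                live_in[b], live_out[b] = inn, out
                changed = True
    return live_in, live_out
```

A variable in `live_out[B]` holds a value that some path leaving B may still use, i.e. it is live at the end of B in the sense defined above.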
As a general note for this document: while the basic methods for doing register allocation are presented up to a certain detail, the variations upon them are mentioned only very briefly. 2 Register allocation at different levels Register allocation can be applied at various levels during code generation: at the expression level, at the local level (i.e. for a basic block\(^1\)), at the global level (for a procedure) and finally at the interprocedural level (for the entire program). 2.1 Register allocation at the expression and/or basic block level Given an expression\(^2\), how can it be evaluated as fast as possible? If the assumption is made that each instruction takes 1 cpu cycle, the quest is to find the shortest sequence of instructions that does the job. To access data in memory, load and store instructions exist. Since the goal is to minimize the number of instructions of the expression evaluation routine and most binary mathematical operations have either two registers or a register and an address in memory as their source operands, this really is a register allocation problem also: the more values that are kept in registers, the fewer instructions will be generated and hence the faster the code will be. 2.1.1 For DAGs that are trees In [60] a now classic algorithm is given that solves the above problem for expressions without common subexpressions. The algorithm is designed for a machine with an unlimited storage capacity and \(N \geq 1\) register(s) which has the following 4 commands: 1. \(C(\text{storage}) \rightarrow C(\text{register})\) 2. \(C(\text{register}) \rightarrow C(\text{storage})\) 3. \(\text{OP}[C(\text{register}), C(\text{storage})] \rightarrow C(\text{register})\) 4.
\(\text{OP}[C(\text{register}), C(\text{register})] \rightarrow C(\text{register})\) where the first one is a load, the second one a store, the third one a binary operation where the first operand is in a register, the second operand resides in memory and the result goes into a register and the fourth is a binary operation where both operands are in registers and the result goes into a register also. A program \(P\) is a sequence of operations (instructions) which evaluate an expression for which the initial values are stored in specific locations in memory. Because of the assumption about the execution time of instructions (cfr. supra), the cost of \(P\) is the number of program steps (instructions) needed to evaluate the expression. \(P\) must terminate with the result of the expression’s evaluation in some register. The basic idea is to label each node \(\eta\) in the binary graph \(G\) that represents the expression (as an example, the graph for the expression \(a/(b+c)-d*(e+f)\) is given in figure 2) and afterwards have an algorithm, that uses this labeled tree as its input, produce a minimal sequence of instructions. A node’s label then, is the minimal number of registers required to evaluate that node (i.e. generate code to evaluate the expression the binary tree with this node as its root stands for) without the use of any store instructions. In fact, not 1 but 3 algorithms are given in the paper, as the initial algorithm only works for --- \(^1\) [2] defines a basic block as a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end without halt or possibility of branching except at the end \(^2\) In this context, an expression is a sequence of binary operations on arguments where the arguments can either be initial (defined externally) or intermediate (defined by operations on other arguments). 
Expressions can be represented by binary trees where the leaf nodes correspond to initial values and the interior nodes correspond to intermediate values, their left and right descendants respectively representing the first and second operand of the operation. The initial algorithm only works for non-commutative operations and is then extended, first to also allow commutative operations and finally to a version that handles (a form of) associativity as well. Also, the extra assumption is made that there are no non-trivial relations between operations and the elements they operate on. For simplicity/clarity, only the basic algorithm will be presented here. 1. LABELING PHASE \( \forall \eta \in G : \text{define label } L(\eta) \text{ from the bottom up, according to these rules:} \) L1 If \( \eta \) is both a leaf node and a left descendant of its ancestor then \( L(\eta) = 1 \). If \( \eta \) is both a leaf node and a right descendant of its ancestor then \( L(\eta) = 0 \). L2 If \( \eta \) has descendants with labels \( l_{left} \) and \( l_{right} \) then \( L(\eta) = \max(l_{left}, l_{right}) \) if \( l_{left} \neq l_{right} \) and \( L(\eta) = l_{left} + 1 \) if \( l_{left} = l_{right} \). 2. CODE GENERATION PHASE Let there be \( N \) registers \( r_1, \ldots, r_N \). The following algorithm is applied to the root node of \( G \): - Let the current node be \( \eta \) with \( L(\eta) > 0 \) and registers \( r_{m}, \ldots, r_N \) available \((1 \leq m \leq N)\).
\( L(\eta) = 1 \) If \( \eta \) is a leaf node then it must be a left descendant, so generate \( r_m = \eta \)\(^3\), i.e. we generate \[ C(\eta) \rightarrow C(r_m) \] If \( \eta \) is not a leaf node, its right descendant must be a leaf node (otherwise \( L(\eta) \geq 2 \) would hold whereas here we have \( L(\eta) = 1 \)), so we assign to \( r_m \) the resulting value of the algorithm applied to the left descendant of \( \eta \) and generate \[ \text{OP}(C(r_m)\ , \ C(\text{right descendant of } \eta)) \rightarrow C(r_m) \] \( L(\eta) > 1 \) Let \( l_1 \) and \( l_2 \) be the labels of the left and right descendants of \( \eta \). If \( l_1 \geq N, \ l_2 \geq N \) then apply the algorithm to the right descendant of \( \eta \) with registers \( r_{m}, \ldots, r_N \) available. Generate a store of the value of the right descendant of \( \eta \) \[ C(r_m) \rightarrow C(\text{storage}) \] Assign the result of applying the algorithm to the left descendant of \( \eta \) to \( r_m \) with registers \( r_{m}, \ldots, r_N \) available and generate \[ \text{OP}(C(r_m)\ , \ C(\text{storage})) \rightarrow C(r_m) \] If \( l_1 \neq l_2 \) and at least one of \( l_1, \ l_2 < N \) then place the resulting value of applying the algorithm to the descendant of \( \eta \) with the highest label in \( r_m \) (with \( r_{m}, \ldots, r_N \) available), place the resulting value of applying the algorithm to the descendant of \( \eta \) with the lowest label in \( r_{m+1} \) (with \( r_{m+1}, \ldots, r_N \) available) and generate \[ \text{OP}(C(r_m)\ , \ C(r_{m+1})) \rightarrow C(r_m) \] There's an exception to this rule: if the smallest label is 0, do not apply the algorithm to that descendant, because if the label is 0 then the value resides in main memory (storage)! Here's an example that shows the expression tree (figure 2) and the generated code for the expression \( a/(b+c) - d*(e+f) \). The machine the code was written for has only 2 registers, \( r_1 \) and \( r_2 \).
To evaluate the given expression, only one store (line 5) of a value into main memory (at location T) was needed:

1) \( d \rightarrow r_1 \)
2) \( e \rightarrow r_2 \)
3) \( r_2 + f \rightarrow r_2 \)
4) \( r_1 \ast r_2 \rightarrow r_1 \)
5) \( r_1 \rightarrow T \)
6) \( a \rightarrow r_1 \)
7) \( b \rightarrow r_2 \)
8) \( r_2 + c \rightarrow r_2 \)
9) \( r_1 / r_2 \rightarrow r_1 \)
10) \( r_1 - T \rightarrow r_1 \)

\(^{3}\)What we mean by \( r_i = \eta \) is that the value at node \( \eta \) is assigned to register \( r_i \)

[Figure 2: Expression graph for \( a/(b+c) - d \ast (e+f) \)]

In brief, the way commutative operators are handled is to select that member of the family of trees computing the same value that minimizes:

- the highest label in the tree
- the number of major nodes
- the number of minor nodes

where a node is a major node if both of its descendants have labels \( \geq N \) and a minor node if it is a leaf node as well as the left descendant of its immediate ancestor node.

Instead of doing register allocation just for a single expression, it can also be done for an entire basic block. Farach and Liberatore ([24], [40]) show that this problem, too, is NP-complete.

### 2.1.2 For DAGs that are not trees

[59] states that the problem of local register allocation is NP-complete when the DAG (Directed Acyclic Graph) that represents the expression is no longer a tree. As such, no single algorithm exists that efficiently solves this problem in all cases. In [34], [32] and [33] an approach is illustrated that tackles this negative result by observing that several algorithms exist that perform well for some particular kind of DAGs. Their heuristic executes all these algorithms in parallel on the DAG subject to register allocation. The best result is then picked out of all the different solutions. As the DAGs typically encountered in real programs belong to a few simple classes, in practice this approach works quite well.
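The labeling and code generation phases for expression trees can be sketched in Python. This is a minimal sketch of the basic (non-commutative) algorithm, not the paper's implementation; the `Node` class and the textual instruction format are illustrative. Run on the example tree with two registers, it reproduces the ten-instruction sequence above, including the single store to a temporary.

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def label(n, is_left=True):
    """Labeling phase: rules L1 and L2."""
    if n.left is None:                       # leaf: 1 if left child, 0 if right
        n.l = 1 if is_left else 0
    else:
        label(n.left, True)
        label(n.right, False)
        ll, lr = n.left.l, n.right.l
        n.l = max(ll, lr) if ll != lr else ll + 1
    return n.l

def gen(n, regs, code, temps):
    """Code generation phase: evaluate n into regs[0] using only regs."""
    if n.left is None:                       # left leaf: load into a register
        code.append(f"{n.val} -> {regs[0]}")
    elif n.right.left is None:               # right operand is a leaf (label 0):
        gen(n.left, regs, code, temps)       # use it directly from memory
        code.append(f"{regs[0]} {n.val} {n.right.val} -> {regs[0]}")
    elif n.left.l >= len(regs) and n.right.l >= len(regs):
        gen(n.right, regs, code, temps)      # both subtrees need all registers:
        t = f"T{len(temps)}"; temps.append(t)
        code.append(f"{regs[0]} -> {t}")     # spill the right result to storage
        gen(n.left, regs, code, temps)
        code.append(f"{regs[0]} {n.val} {t} -> {regs[0]}")
    elif n.left.l >= n.right.l:              # evaluate the bigger subtree first
        gen(n.left, regs, code, temps)
        gen(n.right, regs[1:], code, temps)
        code.append(f"{regs[0]} {n.val} {regs[1]} -> {regs[0]}")
    else:
        gen(n.right, regs, code, temps)
        gen(n.left, regs[1:], code, temps)
        code.append(f"{regs[1]} {n.val} {regs[0]} -> {regs[0]}")

# a/(b+c) - d*(e+f) on a 2-register machine
expr = Node("-",
            Node("/", Node("a"), Node("+", Node("b"), Node("c"))),
            Node("*", Node("d"), Node("+", Node("e"), Node("f"))))
label(expr)
code, temps = [], []
gen(expr, ["r1", "r2"], code, temps)
```

The root receives label 3, which exceeds the 2 available registers, so exactly one spill to the temporary `T0` is generated.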
Apart from some variations on depth-first strategies, they also introduce a randomized evaluation strategy: a random set of bit-vectors representing different orderings of the instructions in the DAG is generated and evaluated, and the best one of them is used for allocation. The paper shows that this random approach has a very good chance of generating a solution that is close to optimal.

A DAG $G(V,E)$ is contiguous if, when evaluating it, the following holds for every node $v \in V$ with children $v_1$, $v_2$:

$$\text{ord}(v_1) < \text{ord}(v_2) \rightarrow \forall w \in \text{subDAG}(v_1) : \text{ord}(w) < \text{ord}(v_2)$$

where $\text{ord}(x) < \text{ord}(y)$ means that node $x$ is visited before node $y$. In other words, a contiguous evaluation of a node $v$ first visits the complete subDAG of a child of that node before it visits the rest of the subDAG with $v$ as a root. [36] and [35] note that almost all DAGs encountered in real programs are contiguous. The algorithm works by splitting the DAGs into trees and performing register allocation on these trees.

2.2 Register allocation at the procedure level

2.2.1 Chaitin's now classic approach: register allocation as graph coloring

The still widely used method of using graph coloring to perform global register allocation across an entire procedure was first presented in [17], although the idea was not entirely new as it was already hinted at in [3], [66] and [58]. The essential benefit of this technique is the uniform way in which all machine idiosyncrasies are integrated into the data structure and the systematic way of dealing with them. The problem of graph coloring can be stated as follows: given a graph $G$ and $n > 2$, is there a way to color the nodes of this graph using at most $n$ colors, subject to the restriction that no two nodes that have an edge between them get the same color? [1] and [25] show that the decision whether such a coloring is possible is NP-complete.
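To make the decision problem concrete, here is a deliberately brute-force Python sketch of the $n$-coloring question; the adjacency-set graph encoding is an illustrative assumption. Its exponential running time reflects the NP-completeness just mentioned.

```python
from itertools import product

def n_colorable(adj, n):
    """Decide whether the graph admits an n-coloring.
    adj maps each node to the set of its neighbours.
    Brute force: tries all n**|V| color assignments, so exponential time."""
    nodes = list(adj)
    for assignment in product(range(n), repeat=len(nodes)):
        color = dict(zip(nodes, assignment))
        # a coloring is valid if no edge joins two equally colored nodes
        if all(color[u] != color[v] for u in adj for v in adj[u]):
            return True
    return False

# a triangle needs 3 colors
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
```

For register allocation the question becomes: with $n$ registers, is the interference graph $n$-colorable?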
The implication of this result is that there will always be certain programs for which there will be serious coloring problems. The basic structure of Chaitin's so-called Yorktown allocator is illustrated in figure 3.

**The interference graph** For each procedure in the program, a (register) interference graph is constructed and a coloring is obtained for it. Each node in the graph corresponds to a name. Normally, an unlimited number of symbolic registers is used in code generation before the actual register allocation is done. Each symbolic register is split into the connected components of its def-use chains\(^4\) and these components are called names. The benefit of this is that this splitting generates additional coloring freedom since distant regions of the program are now uncoupled. Therefore, the mapping of symbolic registers onto names transforms register allocation from a one-to-many mapping into a many-to-many mapping: first, we mapped a single physical register onto a number of different symbolic registers; now, we may map more than one physical register onto the same symbolic register (but in different parts of the code).

Chaitin also uses a special notion of liveness:

A name $X$ is live at point $L$ in program $P$ $\iff$ $\exists$ a control flow path from the entry point of $P$ to a definition of $X$ and then through $L$ to a use of $X$ at point $U$, with no redefinition of $X$ on the path between $L$ and the use of $X$ at $U$: a name is live if it's computed and used before it's recomputed.

As for the notion of interference, two nodes are said to interfere if their corresponding names are ever live simultaneously. Now:

**IF** at a point in $P$: $\exists k$ live names $N_i$ ($1 \leq i \leq k$) **THEN** $k(k - 1)/2$ edges are added to the interference graph $G$.

\(^{4}\)the def-use chain for a certain def(inition) of a variable is the list of all possible uses of that def(inition)
Adding so many edges is not necessary though:

**IF** $k$ names $N_i$ ($1 \leq i \leq k$) are live at the definition point of another name $N'$ **THEN** we add the $k$ interferences $(N_i, N')$ to the graph.

So the actual notion of interference is: two names interfere if one of them is live at a definition point of the other. This is a better definition for 2 reasons: first, building the interference graph is less work now (we only add $k$ edges instead of $k(k - 1)/2$) and second, there exist programs for which the resulting interference graph has a smaller chromatic number\(^5\). The notion of interference as it is defined here indeed makes it possible to handle machine idiosyncrasies with respect to register allocation in a uniform way: e.g. if register R0 is not allowed to be the base register in a load instruction, just have all names that are used as a load base register interfere with the node that corresponds to R0. Also, if, as a side effect, a call destroys the contents of certain machine registers, make all names that are live across the call interfere with all registers whose contents are destroyed.

**Processing the interference graph** Processing the interference graph happens in 3 stages:

1. build the interference graph G (cfr. supra)
2. coalesce the nodes in G (i.e. force some nodes to have the same color and hence be assigned the same register later on) to create a new interference graph G'
3. attempt to construct an n-coloring of G'

The final step could be done using backtracking to find an n-coloring if one exists, but this might result in using exponential amounts of time so it is best to limit this to a fixed (and small) number of iterations.

\(^5\)The chromatic number $\chi$ of a graph is the minimal number of colors needed to color that graph.
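The refined definition, where edges are only added at definition points, can be sketched as a single backward pass over a basic block. This is a minimal illustration, not Chaitin's code; the `(defined_name, used_names)` instruction encoding is an assumption.

```python
def build_interference(block, live_out):
    """block: list of (defined_name, used_names) in program order.
    live_out: names live at the end of the block.
    Walk backwards, maintaining the live set; at each definition point,
    the defined name interferes with every other name live there."""
    live = set(live_out)
    edges = set()
    for defined, used in reversed(block):
        for name in live:
            if name != defined:
                edges.add(frozenset((defined, name)))
        live.discard(defined)      # liveness ends at the definition...
        live.update(used)          # ...and starts at each use
    return edges

# t1 = a + b; t2 = t1 + c; x = t2 * t1, with only x live afterwards
block = [("t1", {"a", "b"}), ("t2", {"t1", "c"}), ("x", {"t2", "t1"})]
edges = build_interference(block, {"x"})
```

Note that only $k$ edges are added per definition, rather than $k(k-1)/2$ per program point.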
**Node coalescing** Node coalescing (which is sometimes also called subsumption) is the process of taking 2 nodes n_i and n_j that don't interfere and replacing them by 1 node n_k that interferes with all nodes n_i and n_j interfered with\(^6\). E.g. register subsumption may benefit from this: a `move R0,R0` instruction can be omitted from the code, so if a `move Rx,Ry` instruction is present\(^7\), we may try to coalesce the nodes corresponding to Rx and Ry and eliminate the instruction. However, this (again) requires the definition of interference to be altered/refined: the target of the move instruction doesn't necessarily have to be allocated to a different register than its source. Therefore, a `move Rx,Ry` instruction at a point where Ry and k names N_i are live only yields k interferences (i.e. edges in G) of the form (Rx, N_i): an edge (Rx, Ry) is not added to G. Coalescing makes it possible to reduce the coloring of a graph G to the coloring of a simpler graph G'.

**Representation of the interference graph** Three operations are defined on the interference graph:

1. building the graph
2. coalescing nodes
3. coloring the graph

This implies that we need to access the nodes in the graph both randomly (determine whether 2 names interfere) and sequentially (go through a list of names that interfere with a given name). Operation 1 needs random access, operation 3 needs sequential access and operation 2 needs both. To satisfy both needs, 2 datastructures are typically used to represent the graph. Random access is very well supported by a bit matrix\(^8\) while sequential access is very easy when adjacency lists are used\(^9\).

---

\(^6\)The terms register, node and name are used loosely in this context to avoid the overuse of "corresponding to".

\(^7\)Rx is the target register, Ry is the source register.

\(^8\)A bit matrix for a graph with n nodes N_i (1 ≤ i ≤ n) is an n × n matrix M where M(i,j) = 1 ⇔ nodes N_i and N_j interfere and M(i,j) = 0 ⇔ nodes N_i and N_j do not interfere.
\(^9\)For every node there is a linked list of all the nodes it interferes with.

**Spilling** Let $\chi$ be the chromatic number of $G$. If $\chi > n$ then the graph can't be colored with $n$ colors. With respect to register allocation, this means that it is not possible to generate code for the given program using only $n$ registers while every name is allocated to a register: some names will not be allocated to a register and for these, store and load instructions will have to be added to the code. The refusal to allocate a register to a name is referred to as spilling that name. The added store and load instructions are called spill code. When spilling is needed, the major issue to be resolved is deciding which name should be spilled. The heuristic used in [17] uses the concept of register pressure: the pressure on the registers at a point $U$ in the program $P$ is the number of names live at $U$ plus the number of machine registers unavailable at $U$. Adding spill code should decrease register pressure. Two rules are given according to which spill code is added:

Rule 1 if a name is spilled, insert a store at each of its definition points

Rule 2 pass-throughs\(^{10}\) are reloaded by putting a load at the entry of each basic block $B$ for every name that is live at the entry to $B$ and is not spilled within $B$, but is spilled in some basic block that is an immediate predecessor of $B$.

These simple rules sometimes insert unnecessary spill code, but this is later on eliminated by a dead code elimination process. Another interesting idea is rematerialization, which uses recomputation as an alternative to the insertion of spill code: load instructions that can't be collapsed can be replaced by a recomputation that directly leaves the result of the computation in the desired register. The main rationale behind this is that sometimes it's cheaper to redo a computation than to store its value in memory and retrieve it again when needed.
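The two-datastructure representation described above (bit matrix for random access, adjacency lists for sequential access) can be sketched as follows; an illustrative sketch, with a plain boolean matrix standing in for a real bit matrix.

```python
class InterferenceGraph:
    def __init__(self, n):
        # boolean matrix: O(1) test whether two names interfere (random access)
        self.matrix = [[False] * n for _ in range(n)]
        # adjacency lists: cheap walk over all neighbours (sequential access)
        self.adj = [[] for _ in range(n)]

    def add_edge(self, i, j):
        if i != j and not self.matrix[i][j]:   # keep both structures in sync
            self.matrix[i][j] = self.matrix[j][i] = True
            self.adj[i].append(j)
            self.adj[j].append(i)

    def interferes(self, i, j):
        return self.matrix[i][j]

    def neighbors(self, i):
        return self.adj[i]

g = InterferenceGraph(4)
g.add_edge(0, 1)
g.add_edge(1, 2)
```

Building the graph uses `add_edge` (random access), coloring walks `neighbors` (sequential access), and coalescing needs both, as in the three operations listed above.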
**An example** Figure 4 shows how Chaitin's technique basically works. No spilling is needed in this example and a coloring with 3 colors (which is a minimal coloring) is found. In the figure, the target number of registers (colors) is represented by $k$ with $k = 3$. In the coloring step (represented in the right part of the picture) the 3 colors are indicated by the labels $c_1$, $c_2$ and $c_3$ next to the nodes. Figure 5 illustrates that the technique can not handle all graphs: although in practice the coloring results are very good, even some simple graphs can not be colored in an optimal way.

2.2.2 Chow and Hennessy's priority based graph coloring algorithm

In [18], [19] another approach to graph coloring is presented, based on the observation that a good allocation should use a cost/benefit analysis, hence the name priority based. During the allocation process, the unconstrained live ranges are never considered: live ranges that interfere with fewer other live ranges than there are registers will ultimately always be colorable. Thus, the algorithm needs only to be concerned with constrained live ranges. The 3 steps of the algorithm are repeatedly applied, coloring one live range with each step, until either there are no more uncolored live ranges left or there's no color left that can be assigned to a still uncolored live range:

1. calculate the priority function $P(lr) = \frac{S(lr)}{N}$ (with $lr$ a constrained live range) if it was not calculated before. In this definition of $P(lr)$, $S(lr)$ is defined as follows: $S(lr) = \sum_{i=1}^{N} s_i \times w_i$ and $s_i = \text{LOD\_SAVE} \times u + \text{STR\_SAVE} \times d - \text{MOV\_COST} \times n$ with $i$ one of the $N$ live units of $lr$.

---

\(^{10}\) A pass-through is a computation which is live at the entry to a code interval but which is neither used nor defined within it.
[Figure 4: Expression graph for $a/(b+c)-(d \times (e+f))$]

[Figure 5: Graphs that can be problematic for Chaitin's heuristic]

Here $u$ is the number of uses in live unit $i$, $d$ the number of defines in $i$, $n$ the number of register moves needed, and $\text{LOD\_SAVE}$, $\text{STR\_SAVE}$ and $\text{MOV\_COST}$ the saves/costs in terms of cpu cycles. If $P(lr) < 0$ or $lr$ is uncolorable, mark $lr$ as a variable that will not reside in a register. $lr$ is then deleted from the conflict graph.

2. Find $lr^*$, the live range with the highest priority function value. Assign a valid color to $lr^*$.

3. For each live range $lr$ interfering with $lr^*$, check whether it needs to be split or not. $lr$ must be split if all valid colors for $lr$ are already assigned to one or more of its neighbours.

2.2.3 Improvements and variations on the graph coloring approach

**node coalescing** Graph coloring removes copies by coalescing nodes: if the source and target of a copy instruction do not interfere in the conflict graph, these nodes can be treated as one single node. The coalescing heuristic in [17], [16] could make a graph uncolorable: every pair of nodes that is not connected by an edge in the interference graph is coalesced. Unfortunately, the new node being introduced is often so constrained that coloring of the resulting graph after coalescing is no longer possible, hence introducing spill code. [13] improves upon that heuristic by introducing a conservative approach that preserves graph colorability: if the node resulting from coalescing has fewer than $n$ neighbours with degree $\geq n$ then coalescing is guaranteed not to turn an $n$-colorable graph into a non-$n$-colorable one. However, this approach is too conservative and leaves many opportunities for coalescing open.
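The conservative test just described ("fewer than $n$ neighbours with degree $\geq n$") can be sketched as a small predicate; an illustrative sketch, with the graph given as an adjacency-set dict.

```python
def can_coalesce_conservatively(adj, a, b, n):
    """Conservative (colorability-preserving) coalescing test:
    coalescing a and b is safe if the node that would result from the
    merge has fewer than n neighbours of significant degree (>= n)."""
    merged_neighbours = (adj[a] | adj[b]) - {a, b}
    significant = [v for v in merged_neighbours if len(adj[v]) >= n]
    return len(significant) < n

# a and b both interfere only with c; with n = 2 the merged node has
# one significant neighbour (c, degree 2), so coalescing is safe
adj = {"a": {"c"}, "b": {"c"}, "c": {"a", "b"}}
safe = can_coalesce_conservatively(adj, "a", "b", 2)
```

The test errs on the safe side: it may reject coalescings that would in fact leave the graph colorable, which is exactly the conservatism criticized above.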
[26] introduces a more aggressive approach, since the graph may still be $n$-colorable when a node has more than $n$ neighbours with degree $\geq n$: 2 or more of those neighbours may end up with the same color. The idea is to interleave the simplification heuristic from [17] and [16] with the conservative approach of [13]. The simplify and coalesce routines of the Yorktown allocator scheme are called in a loop, hence the name *iterated coalescing*. The authors show that over 60% of all moves are removed by their coalescing scheme. [54] introduces a technique called *optimistic coalescing* which is based upon the *optimistic coloring* approach from [10]: coalesce nodes and, when the coalescing decision turns out to be troublesome during coloring, split up the coalesced node into the set of original nodes again. This technique reduces the overall spill cost since nodes have a higher chance of being colored.

**live range splitting** The basic algorithm assigns a variable to a register for that variable's entire lifetime. By splitting the variable's live range into 2 or more parts (and hence creating 2 or more variables out of the given one), some of these parts may be assigned a register while others may not. Possibly, all parts may be assigned a register, introducing only some register moves (which are cheap in terms of cpu cycle usage) as overhead. Live range splitting was described in e.g. [20], [18] and [37].

**graph splitting** [28] describes an approach that partitions the interference graph into subgraphs that are colored individually and later merged. Separation of the graph happens along *maximal cliques*. A maximal clique is a maximal subgraph in which every node is connected directly to every other node. Cliques are trivially colored: the required number of colors is the size of the clique, i.e. its number of nodes.

**rematerialization** In an attempt to improve the spill code, a technique called *rematerialization* was suggested by [17].
In [12] and [13] this is extended and handled more in depth. Rematerialization has to do with the observation that, when choosing the least expensive way to accomplish a spill, it is sometimes cheaper to do a recomputation of the value than to store it in main memory and load it afterwards. Rematerialization is not just one technique but a set of techniques that accomplish the same goal mentioned above. These are some practical opportunities for rematerialization:

- immediate loads of integer\(^{11}\) constants
- computing a constant offset from the frame pointer or the static data-area pointer
- loads from a known constant location in either the frame area or the static data-area
- loading non-local frame pointers from a display\(^{12}\)

The key to recomputation is that it should be possible to recompute the desired value in a cheap way from values and/or operands available throughout the procedure. While the approach in [17], [16] can only handle live ranges with a single value, [13] introduces a technique to handle live ranges with multiple values\(^{13}\): each live range is split into its component values (by constructing each procedure's static single assignment (SSA) graph), each value is tagged with rematerialization information and finally new live ranges are formed from connected values that have identical tags (after some form of constant propagation has been used to propagate the rematerialization tags).

**optimistic coloring** In [10] and [13] a modification to the coloring algorithm of [17] is introduced, which is called optimistic coloring. Contrary to the approach in [17], nodes that are removed from the conflict graph and are assumed uncolorable are not spilled at once. Instead, they are put on the coloring stack just as the nodes for which coloring is possible.
The rationale behind this is that some of the neighbours of this node may end up having the same color, making the over-constrained node colorable anyway and avoiding a needless spill code insertion. This optimistic approach is illustrated in figure 6 on an example that can not be colored successfully by [17]. An early version of such an optimistic variation on the standard coloring algorithm was already presented in [45], but that algorithm only finds a coloring: there is no notion of finding a \(k\)-coloring for a given \(k\) and no mechanism for inserting spill code.

**Coloring register pairs** Some architectures require e.g. double-precision floating point values to be kept in adjacent (and sometimes also aligned\(^{14}\)) register pairs. [17] suggests that such a machine idiosyncrasy can be handled by the graph coloring scheme it presents. However, in [11] it is noted that the standard coloring heuristic in [17] often over-estimates the register demand, resulting in more spilling than required. Limited ways to handle this problem were described in [30] (introducing a method to handle the register pair constraints that arise in the shift instructions on the ROMP microprocessor) and [50] (a method for allocating structures into an aggregate set of adjacent registers that produces good results when combined with the allocator described in [10]). In [11] it is shown that the optimistic coloring approach naturally avoids the over-spilling problem: a node is only spilled when no register pair is available when selecting a color for the node under consideration.

\(^{11}\)or, on some machines, floating point

\(^{12}\)the list of stack frame pointers the CPU copies into the new stack frame from the preceding frame is sometimes called the display. The first word of the display is a pointer to the last stack frame.

\(^{13}\)i.e. a live range with multiple definitions of the same variable: there is a clear distinction to be made between a value and a live range

\(^{14}\)the first register of an adjacent pair of registers should have an even number

[Figure 6: Briggs' heuristic applied to a graph Chaitin's heuristic can't color with 2 colors]

2.3 Register allocation at the program level

Contrary to other register allocation methods, global methods take the control flow of the target program into account.

2.3.1 Register allocation by graph fusion

Every Yorktown allocator variant treats live range splitting, live range spilling and register assignment during different phases of the process and does this repeatedly, since it uses procedures as its objects to work on, not considering what happens to live ranges across procedure boundaries. By building interference graphs for all regions (a region can be any combination of basic blocks) and then fusing these graphs into one global interference graph, a better allocation can be achieved, especially since either static estimation or profiling is used to guide the decisions. This guidance can help integrate register allocation into the total optimisation framework: it is now known where the most critical/optimized parts of the program are, so register allocation may be treated differently there to get better performance, not letting decisions from less important parts spoil the decisions in the critical part of the code. The idea is to delay coloring of the conflict graphs of a procedure until more of the program is known, i.e. until the global conflict graph has been constructed. This is exploited in [44]; the idea of regions was introduced in [29]. In the graph fusion approach the partial conflict graphs are merged along control flow edges by means of a fusion operator. This fusion operator maintains the invariant that the resulting graph is simplifiable (i.e. colorable) if the merged graphs are simplifiable also.
Spilling is delayed until this merging phase, where gradually more live ranges are spilled as more graphs are merged and hence more information becomes available.

2.3.2 Demand driven register allocation

In the technique illustrated in [57], graph coloring is used to allocate registers but not to assign them, i.e. graph coloring decides which values get assigned a register but not which register specifically. To decide which values get assigned a register, a measurement is made of the costs and the benefits of that decision, as well as an estimate of the chance that, given an instruction and a variable that is assigned a register before that instruction, that variable will still be in a register after the instruction has executed. Local allocation is done first, since restricting the decision to a small part of the program enables a rigorous search for the optimal allocation. Afterwards, global allocation is done starting from the most deeply nested loops and using both the cost/benefit and the estimate heuristics as a guide. The technique is promising but when very few registers are available, it can not be shown that the approach consistently beats the classic approach in [17]. An approach that uses a similar sequence of phases is described in [38]. That approach produces code that is 25% slower compared to Chaitin-style allocation but the allocation process itself is almost 2.5 times faster. After the greedy local allocation phase, register assignment is done globally using a priority function: based on their use frequency as well as their place in the program, the candidates that were selected to be kept in a register in phase one are now assigned a specific register.

2.3.3 Linear scan register allocation

An intraprocedural linear register allocation algorithm was proposed by [15].
Having a linear register allocator is interesting for compiler response time, since graph coloring typically has an $O(n^2)$ complexity, with $n$ the number of live ranges, due to the construction of successive graphs. Interprocedural linear scan register allocators have been introduced by both [56] and [63] (based on the bin-packing algorithm described in [9]: registers are viewed as bins into which temporary live ranges are packed, taking into account the restriction that a bin can only contain one live range at a time). Although they do an only slightly worse allocation job than the register coloring variants, they are much faster as soon as \( n \) starts growing. A linear scan allocator views liveness as a **lifetime interval** and visits each lifetime interval in turn (according to its occurrence in the static linear code order). It considers how many other lifetime intervals are currently live as well: that number represents the register pressure at that point, i.e. the need for registers. If the register pressure is greater than the number of registers available, a heuristic spills one (or more, if needed) of them to memory.

### 2.3.4 Register allocation at link-time

As said, global program optimization is concerned with optimizing the entire program, across procedure boundaries. Large programs usually have their procedures structured in a collection of several modules and libraries that are linked together to create an executable. If the source code of every module and library is available to the compiler, optimizing an entire program can be done with (extensions of) the usual techniques. However, the source code is not always available (commercial libraries are typically already compiled when they're delivered) or different modules might be written in different languages. The first time all the information available comes together is at link-time. Several link-time optimizers exist (e.g.
[61], [49]) and their results show that having another optimization pass at link-time is very beneficial. Register allocation during this phase is also interesting, since calls from a procedure in one module to a procedure in another module prohibit a **good** allocation if both modules are not available to the compiler at the same time. If the compiler has to do register allocation for separate modules, two modules might use different registers for the same global, or the same register for different locals. Both of these problems disappear if registers can be assigned at link-time over the entire program. In [64] a technique is presented that uses compiler annotations as well as profile information to achieve this goal. The annotations the compiler makes allow register allocation to be treated as a form of relocation: the linker rewrites each module based on the allocation. Given are the following annotations (for a three-operand instruction set):

- **REMOVE.name**: remove the annotated instruction if \( name \) is assigned to a register
- **OP1.name**: replace the first operand by the register \( name \) is assigned to
- **OP2.name**: replace the second operand by the register \( name \) is assigned to
- **RESULT.name**: replace the result operand by the register \( name \) is assigned to
- **LOAD.name**: replace the load instruction of this annotation by a move instruction that copies \( name \) from the register it's currently assigned to, to a temporary register
- **STORE.name**: replace the store instruction of this annotation by a move instruction that copies \( name \) from the temporary register it's currently in, to the register it was assigned to

The first 4 annotations are straightforward, the latter 2 are somewhat more complicated.
An example with the code generated for the C assignment \( x = y++ + z \) illustrates the use of \( \text{LOAD}.name \) (as well as that of some other annotations): it is needed when the value of that name changes while its original value is used further in the code.

    R1 := load y        LOAD.y
    R2 := R1 + 1        RESULT.y
    store y := R2       REMOVE.y
    R2 := load z        REMOVE.z
    R3 := R1 + R2       OP2.z RESULT.x
    store x := R3       REMOVE.x

which, if x, y and z are all assigned to registers, becomes:

    R1 := y
    y := R1 + 1
    x := R1 + z

How is the code annotated? The code is seen as a sequence of commands. Each command is one of three types:

1. **leaf**: evaluate a single variable
2. **assign**: assign the value of a previous command to a variable
3. **operation**: perform an operation using the values of previous commands and produce a new result value

Some commands must be marked as time critical ([62] presents an algorithm for doing so): the value used there is not always the current value of the variable. Once this marking is done, code can be annotated as follows:

**Case 1**: if the command is a **leaf** for some variable \( v \) then generate a **load** into a temporary register\(^{15}\). If the leaf is marked **time critical** then annotate the instruction with \( \text{LOAD} \).\( v \) else annotate it with \( \text{REMOVE} \).\( v \).

**Case 2**: if the command is an **operation**, generate the instructions to perform it (either 1 instruction or a sequence of instructions). If the result value of this operation is only used once (by an assignment command \( v := \ldots \)), then annotate the instruction with \( \text{RESULT} \).\( v \).

**Case 3**: if the command is an **assignment** to some variable \( v \), generate a **store**. If the operand is a leaf command or if the operand is used in another command also, then annotate the instruction with \( \text{STORE} \).\( v \) else annotate it with \( \text{REMOVE} \).\( v \).
For each instruction, if an operand is a leaf for some variable \( v \) and it is not marked as **time critical**, annotate all instructions that use that operand with either \( \text{OP1} \).\( v \) or \( \text{OP2} \).\( v \) (depending on whether it is the first or the second operand). This technique gives remarkably good results: it does register allocation substantially better than the known compile-time algorithms.

### 3 Register allocation and its relation to other optimization techniques

Register allocation is just one optimization technique within a large set of others that are applied one after the other. These techniques are not always independent, so the influence of other techniques on register allocation is an important research topic. For an overview of most of these techniques, excellent references are [2] and, more recently, [48].

---

\(^{15}\)Some registers are reserved for temporary use and as such can not be used for holding variables.

3.1 Instruction Scheduling

Instruction scheduling is concerned with the reordering of program instructions to improve performance. This reordering can e.g. enhance ILP (instruction level parallelism) or reduce the number of conflicts in the pipeline.

3.1.1 A brief introduction to instruction scheduling

The most basic instruction scheduling techniques are branch scheduling and list scheduling, which will be presented here as a brief introduction to the field.

**Branch scheduling** Branch scheduling incorporates 2 related aspects:

- filling the *delay slot(s)* after a branch instruction with useful instructions
- covering the delay between performing a compare and being able to branch based on its result

Many architectures have delayed branch or call instructions, i.e. one or more slots after the branch or call instruction can contain instructions that are executed while the branch/call is handled. This way, the pipeline stalls that the typically longer execution time of calls and branches would otherwise cause are avoided.
Since whether a branch is taken or not often depends on a condition, the instructions in the delay slots are not always executed: the concept of nullification makes sure they are only effectively executed when the branch is taken and are not executed otherwise. Jumps, calls and branch-always instructions (may) have delay slots that are always executed. Branch scheduling tries to fit appropriate instructions into these delay slots to maximize program performance.

Some machines require a number of cycles to have passed between a condition-determining instruction and a branch instruction that depends on the outcome of the condition. If the required time has not passed by the time the branch is executed, the processor stalls at the branch instruction for the remainder of that number of cycles. By placing the compare as early in the schedule as possible, the potential number of wasted cycles can be minimized.

**List scheduling**

List scheduling is a basic block scheduling technique, i.e. its goal is to reorder the instructions within a basic block in such a way that the execution time of the basic block is minimal while it still produces the same result as originally intended. The instructions in the basic block under consideration are put into a DAG (directed acyclic graph). List scheduling initially traverses this DAG from the leaves towards the roots, labeling each node with the maximum possible delay from that node to the end of the basic block. In a second step, the DAG is traversed from the roots towards the leaves while selecting nodes to schedule and keeping track of the current time and the earliest time each node should be scheduled to avoid stalls. Since the problem is NP-complete, heuristics are applied to select the best instruction from the set of candidates for each time slot.
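The two-pass procedure above can be sketched in a few lines. The following Python sketch is illustrative only: it assumes a single-issue machine, invented instruction names and latencies, and uses the delay labels from the first pass as greedy priorities in the second.

```python
# A minimal list-scheduling sketch for one basic block (illustrative only).
# latency: {instr: cycles}; deps: {instr: set of data-dependence predecessors}.

def list_schedule(latency, deps):
    succs = {i: set() for i in latency}
    for i, ps in deps.items():
        for p in ps:
            succs[p].add(i)

    # Pass 1: label each node with its maximum delay to the end of the block.
    prio = {}
    def delay(i):
        if i not in prio:
            prio[i] = latency[i] + max((delay(s) for s in succs[i]), default=0)
        return prio[i]
    for i in latency:
        delay(i)

    # Pass 2: walk forward in time, issuing the ready instruction with the
    # highest priority in each slot (a greedy heuristic, not optimal).
    ready_at = {i: 0 for i in latency}        # earliest stall-free start time
    unscheduled, schedule, time = set(latency), [], 0
    while unscheduled:
        ready = [i for i in unscheduled
                 if deps[i].isdisjoint(unscheduled) and ready_at[i] <= time]
        if not ready:
            time += 1                         # nothing ready yet: stall a cycle
            continue
        pick = max(ready, key=lambda i: prio[i])
        schedule.append(pick)
        unscheduled.remove(pick)
        for s in succs[pick]:
            ready_at[s] = max(ready_at[s], time + latency[pick])
        time += 1
    return schedule

# Loads a, b take 2 cycles; c depends on a, d depends on b and c.
lat  = {"a": 2, "b": 2, "c": 1, "d": 1}
deps = {"a": set(), "b": set(), "c": {"a"}, "d": {"b", "c"}}
print(list_schedule(lat, deps))               # → ['a', 'b', 'c', 'd']
```

The sketch issues the high-latency loads first so that their results are ready by the time the dependent arithmetic is scheduled, which is exactly the stall-avoidance behaviour described above.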
3.1.2 Integrating register allocation and instruction scheduling

Register allocation and instruction scheduling are closely related: each influences the other's results when one is done after the other. If register allocation is done first, instruction scheduling becomes more constrained: instructions can not (always) be rescheduled in such a way that previously non-overlapping live ranges suddenly overlap, because those live ranges may have been assigned the same register. Vice versa, if instruction scheduling is done first, register allocation has a harder job to tackle, as there may be more overlap between live ranges.

As the question of which phase must be done before the other has no general answer, many attempts have been made to integrate the 2 consecutive phases into 1 single phase that handles both problems, or at least to make the first phase aware of the one that follows it.

In [6] URSA, the Unified ReSource Allocator, is presented, which allocates both functional units and registers on a VLIW (Very Long Instruction Word) architecture. Aware of the phase ordering problems described above, this approach comes up with a new set of phases where each phase has a minimal impact on the subsequent one. These phases are (1) the computation of resource requirements, (2) the identification of regions with excessive requirements, (3) code motion to reduce the (excessive) requirements and (4) resource assignment. In [7] and [8] this framework is described in more detail with respect to instruction scheduling vs. register allocation. In [5] URSA has evolved into GURRR, the Global Unified Resource Requirements Representation, where the program dependence graph is augmented with resource requirement information.

[51] avoids the increasing complexity of a full integration of the 2 phases by modifying the global register allocation phase, which comes first, to become scheduler sensitive.
Since scheduling comes after register allocation, the inserted spill code will also be scheduled nicely. [52], by the same authors, inverts the order of that approach, making the scheduler register allocation sensitive. This way, it is hoped that the amount of spill code to be inserted will be minimal. The authors describe their further experiences with and conclusions from these schemes in [53].

In [55] register allocation is done based on the coloring of the parallelizable interference graph. This representation ensures no false dependencies are introduced, and hence all the options for parallelism remain available to the scheduler. Heuristics are introduced to make a trade-off between the costs of register spilling and the loss of instruction level parallelism.

[47] shows that, for a basic block, the integration of register allocation and instruction scheduling into one problem (CRISP: Combined Register Allocation and Instruction Scheduling) is easier to solve approximately than the classic graph coloring approach. An algorithm based on a heuristic that would have been impossible to formulate outside of such a full integration is introduced. The algorithm is called the \((\alpha,\beta)\)-Combined Heuristic, where \(\alpha\) is a measure of the register pressure and \(\beta\) is a measure of the instruction level parallelism.

3.1.3 Register allocation for loops

Loops are considered a special case with respect to register allocation since live ranges may be spread over different, successive iterations of the loop. Since loops are often those parts of the program where most of the time is spent, both register allocation and instruction scheduling for loops are critical.

**software pipelining**

This is a loop scheduling technique that overlaps the execution of several consecutive iterations of the loop in an attempt to enhance parallelism in the loop body: instructions can be scheduled in such a way that instruction level parallelism is maximal.
Obviously, some instructions have to be set up before and after the rescheduled loop body to ensure correctness. Unfortunately, software pipelining often increases register pressure, making register allocation even harder or less successful (see [43], [41], [42], [65]).

**the Meeting graph**

In [22], [23], [21] and [39] the meeting graph is introduced as an alternative to the conflict graph, which can be a circular-arc graph for loops. Two live ranges “meet” if one of them ends at the time the other one starts. Figure 7 illustrates the difference between the conflict graph and the meeting graph of a simple example. The labels next to the nodes of the meeting graph are the lengths of the live ranges they represent, expressed as a number of cycles. The meeting graph incorporates a notion of time which is not in a normal conflict graph and which is obviously needed when treating a loop body, as live ranges can span more than one iteration of the loop.

Figure 7: Both the conflict graph and the meeting graph for a small example

3.2 Register allocation vs register windows

While the assignment of variables to registers in register allocation is handled by the compiler and is thus the responsibility of the software, there also exists a hardware mechanism that tries to achieve the same goal. If the hardware makes use of register windows, the register set (which is typically larger than the register set available when there is no hardware support, i.e. when register allocation is used) is a circular register buffer. If a call occurs, the tail pointer is moved to allow a number (a window) of registers (either a fixed number or a number dependent on what is needed; this varies between architectures) to be used for the locals in the called procedure. If an overflow occurs, enough registers are spilled from the head of the buffer to create a new window at the tail.
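As a toy illustration of this head/tail mechanism (the sizes and names are invented, and real windowed architectures differ in many details), the following Python sketch counts the spill and reload traffic a call chain causes in a fixed-window circular buffer:

```python
class RegisterWindows:
    """Toy model of a register-window file: fixed-size windows in a circular
    buffer of physical registers; counts the spills/reloads a call chain causes.
    """

    def __init__(self, n_regs=32, window=8):
        assert n_regs % window == 0
        self.capacity = n_regs // window   # how many windows fit in hardware
        self.resident = 0                  # windows currently in registers
        self.spilled = 0                   # windows pushed out to memory
        self.spill_ops = 0                 # total spill/reload events

    def call(self):
        """Carve a fresh window from the tail for the callee's locals."""
        if self.resident == self.capacity:     # overflow: spill oldest window
            self.spilled += 1
            self.resident -= 1
            self.spill_ops += 1
        self.resident += 1

    def ret(self):
        """Release the newest window; reload the caller's if it was spilled."""
        self.resident -= 1
        if self.resident == 0 and self.spilled:  # underflow: reload a window
            self.spilled -= 1
            self.resident = 1
            self.spill_ops += 1

rw = RegisterWindows(n_regs=32, window=8)      # room for 4 windows
for _ in range(6):
    rw.call()                                  # a call chain 6 deep
for _ in range(6):
    rw.ret()
print(rw.spill_ops)                            # → 4 (2 spills + 2 reloads)
```

With a chain no deeper than the buffer capacity, `spill_ops` stays 0, which is the case the hardware scheme is optimised for; only deep or recursive chains pay the spill cost mentioned in the text.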
In [62] a study is presented that compares both approaches by measuring how well either register management scheme removes load and store instructions. Their actual measure is \( MR = (M_1 + S)/M_0 \) where

- \( M_0 \) is the number of load and store instructions used to move variables between main memory and the register set when no register management scheme is used.
- \( M_1 \) is the number of load and store instructions used for these variables when a register management scheme is in use.
- \( S \) is the number of load and store instructions that were added due to the use of the register management scheme.

Note that \( MR = 0 \) is optimal, while \( MR > 1 \) means the applied register management scheme actually made things worse. Also, some more traditional schemes are included in the tests. Their conclusion is that register allocation at link time (if combined with profiling information) outperforms all the other software-based methods and was sometimes even better than the register windows approach. Register windows actually do a slightly better job at allocating, but since the method is hardware based, this results in the need to sacrifice some chip real estate (for the extra registers) and a slight increase of the cycle time (because of the overhead).

The article also mentions the difference between cooperative register allocation and selfish register allocation. In the former, each procedure sometimes doesn’t get all the registers it needs for its local variables, but procedures in the same call chain use different registers, so spills and reloads are only needed for recursive calls or indirect calls (through procedure variables). In the latter, registers are allocated for each procedure separately, so each procedure is able to use all the registers. Unfortunately, this means that registers used by the procedure must be spilled at its entry and must be reloaded when exiting the procedure.

4 Optimal register allocation?
In spite of the NP-completeness of the register allocation problem, several attempts to achieve an optimal solution within a reasonable time have been made. Among the approaches used are dynamic programming ([46], focusing on optimal instruction scheduling but integrating some aspects of register allocation into that framework) and integer linear programming\(^{16}\) ([4], [27], [31]). Unfortunately, all of them are painfully slow. Even the most recent attempt in [4], where optimal register allocation is split into 2 phases (optimal spill code placement followed by optimal coalescing), is not satisfyingly fast: an efficient algorithm for the former phase is presented, while an efficient algorithm for the latter phase is left as an open question (although both an inefficient optimal algorithm and an efficient suboptimal one are given).

---

\(^{16}\)linear programming where all variables are constrained to take integer values
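To get a feel for why such exact formulations are slow, consider a toy exhaustive-search version of optimal spill selection (purely an illustration under invented data, not the algorithm of [4]): given live ranges as half-open intervals with spill costs and \( k \) registers, it tries all \( 2^n \) spill sets and keeps the cheapest one that brings register pressure down to \( k \) everywhere.

```python
from itertools import combinations

def optimal_spill(ranges, k):
    """ranges: {name: (start, end, cost)} with half-open [start, end) live
    intervals; returns the cheapest spill set so that at most k ranges are
    live at any program point. Exhaustive over 2**n subsets — exact but slow.
    """
    names = list(ranges)

    def fits(kept):
        # Maximum overlap of intervals is attained at some interval start,
        # so it suffices to check pressure at the start points of kept ranges.
        points = {ranges[n][0] for n in kept}
        return all(sum(ranges[n][0] <= p < ranges[n][1] for n in kept) <= k
                   for p in points)

    best, best_cost = None, float("inf")
    for r in range(len(names) + 1):
        for spill in combinations(names, r):
            kept = [n for n in names if n not in spill]
            cost = sum(ranges[n][2] for n in spill)
            if cost < best_cost and fits(kept):
                best, best_cost = set(spill), cost
    return best, best_cost

# Four live ranges, two registers: pressure peaks at 4, so spilling is forced.
lr = {"a": (0, 6, 3), "b": (1, 4, 1), "c": (2, 5, 1), "d": (3, 7, 5)}
best, cost = optimal_spill(lr, 2)   # spilling the two cheap ranges b and c
```

An ILP formulation expresses the same feasibility constraints as linear inequalities over 0/1 spill variables; the exponential search space is the same, which is why the text calls these approaches painfully slow in practice.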
Contrasting Classification with Generalisation

Thomas Kühne
School of Mathematics, Statistics and Computer Science
Victoria University of Wellington
PO Box 600, Wellington, New Zealand
Email: Thomas.Kuehne@mcs.vuw.ac.nz

Abstract

Classification and generalisation are two of the most important abstraction mechanisms in modelling, and while they share a number of similarities, they are unmistakably different with respect to their properties. Recently, a number of (meta-)modelling language design approaches de-emphasised the differences between classification and generalisation in order to gain various advantages. This paper aims to demonstrate the loss in precision and the loss of sanity checks such approaches entail. After a careful comparison between classification and generalisation, I identify problems associated with the above mentioned approaches and offer alternatives that retain a strong distinction between classification and generalisation.

Keywords: classification, generalisation, strict metamodelling

1 Introduction

The main purpose of modelling is to reduce complexity. Sometimes it suffices to simply reduce the information per element in the universe of discourse but otherwise retain a one-to-one correspondence between these elements and model elements (i.e., creating a token model (Kühne (2006))). However, often there is a need to use more abstract views, in particular, to disregard particularities of individual elements and only capture the relevant universal properties, creating a many-to-one correspondence between elements from the universe of discourse and modelling elements. One thus obtains a way to characterise many individuals by referring to one representative only. In particular, classification is used to create types from instances, giving rise to type models (Kühne (2006)), abstracting away from the identity of instances and their different property values.
On the other hand, generalisation is used to create a genus (hypernym) from species (hyponyms) (Rayside & Campbell (2000)), i.e., to create supertypes from subtypes, thus abstracting away from a number of subtypes and their respective differences.

Today, the differences between classification and generalisation are well understood, but this has not always been the case. Before Frege laid the foundation for a modern axiomatic logic with his “Begriffsschrift” (Concept Script) in 1879, there was no systematic way to avoid mistakes arising from confusing classification with generalisation and the logical fallacies that inevitably follow (see section 3). The arbor porphyriana (aka “tree of knowledge”, a concept tree as inspired by Porphyrios) in the version of Puchotius from 1730 relates a supertype (e.g., “Animal”) with a type (e.g., “Homo”) in the same way as a type (“Homo”) with its instances (e.g., “Petrus”) (Puchotius (1730)). This lack of proper distinction between classification and generalisation could still be observed in 1890 (see Frege (1889, page 193, footnote 4)). As such, this relatively modern attainment should not be given up lightly, even when apparent advantages seem to suggest this from a pragmatic point of view.

A number of (meta-)modelling language design approaches have been proposed that can be regarded as emphasising the similarities between classification and generalisation and de-emphasising their differences in order to yield

- a simplified language design (Jackson (2006)),
- more flexibility in metamodelling (Varró & Pataricza (2003), Gitzel & Merz (2004)), and
- an increased utility of a language specification (OMG (2004)).

In this paper, I argue that there is value in maintaining a clear distinction between classification and generalisation, and that alternatives to the above mentioned approaches exist that maintain such a distinction.
Section 2 compares classification with generalisation, pointing out similarities and fundamental differences. Section 3 analyses some of the aforementioned approaches and suggests alternative solutions. Section 4 concludes.

2 Comparison

The characterisation of classification and generalisation in the introduction, as typically using instances and types as their respective domains, suggests that these abstraction mechanisms serve very different purposes, and indeed this is the case for most common usage scenarios. However, note that classification may also be performed on types (metamodelling, see Kühne (2006)) and it is possible to generalise at the instance level as well, leading to so-called abstract objects (Mittelstraß (1995)). In the following comparison, I am hence careful to compare classification with generalisation by applying them to the same domain in order to avoid observing discrepancies that only exist due to the application to different domains.

is false since Linnaeus did not name all dogs, but the concept “Dog” (Canis lupus familiaris), i.e., the subspecies subordinate to species “Canis lupus”. It is of course true, though, that \( |\text{Lassie}|_{\sim_{\text{dog}}} \) implicitly defines a set of instances (an equivalence class), which could be the extension of a concept—in particular, if the equivalence relation \( \sim_{\text{dog}} \) is defined by referring to a predicate \( \text{Dog}(X) \), i.e., instances are considered equivalent with each other if they satisfy predicate \( \text{Dog}(X) \). Yet, \( |\text{Lassie}|_{\sim_{\text{dog}}} \) refers to all instances of that set, not the set itself. In other words, the concept “Dog” is not a collection of dogs. If one wants to introduce an alternative name for the notion of an abstract object like \( |\text{Lassie}|_{\sim_{\text{dog}}} \), then prototype would best describe its nature.
An abstract object captures what is universal about a set of instances but resides at the same logical level as the instances, much like an object in a prototype-based language from which other objects can be cloned (Ungar & Smith (1987)). It thus has the quality of a type but is not (yet) a type, hence “prototype”. Finally, note that the way we generalise from Lassie to \( |\text{Lassie}|_{\sim_{\text{dog}}} \) conforms to how a number of special concepts may be generalised to a general one (see equation 3).

A general concept can be regarded as capturing what is universal about its subconcepts. This is true with respect to instance facet properties, e.g., tax rate values, but also with respect to type facet properties, e.g., attributes that require certain properties of instances, such as “age : integer”. If an equivalence relation \( \sim_{\text{named}} \) considers all subconcepts to be equivalent that feature a “name” attribute and is used on a number of subconcepts, such as “Collie”, “Poodle”, and “Beagle”, then \( |\text{Collie}|_{\sim_{\text{named}}} \) is the generalisation of these subconcepts and could be labelled “NamedDog”.

Because a general concept \( C_g = |C|_{\sim} \) is derived from more specialised concepts \( C_i\ (i \in [1..n]) \) by disregarding differences in the \( C_i \), every \( C_i \) will at least have the requirements on instances that \( C_g \) has, which means that if an instance satisfies the requirements of a \( C_i \), it will also satisfy the requirements of \( C_g \).

\[ \forall i, x : \iota(C_i)(x) \rightarrow \iota(C_g)(x) \] (6)

Here, I am referring to the intensions of concepts since I want to emphasise the fact that a concept can be viewed as being independent from the instances it describes. A concept resides at a higher logical language level than the instances within its extension, and its intension can be used as a judge with respect to instances that may or may not fall under the concept.
This view of concepts is particularly important if one wants to deal with dynamic extensions that may shrink/grow over time, and it is the prevalent one in modelling languages such as the UML (OMG (2007)) where a class / type is regarded as an intensional description of its instances.

From equations 1 & 6 it follows that the extension of the superconcept is a superset of the union of the extensions of its subconcepts.

\[ \epsilon(C_g) = \epsilon(|C|_{\sim}) \supseteq \bigcup_i \epsilon(C_i). \] (7)

In equation 7, “\( \supseteq \)” can be replaced with “\( = \)”, if \[ \forall e : e \in \epsilon(C_g) \rightarrow \exists i : e \in \epsilon(C_i), \] i.e., if there are no elements which are classified by \( C_g \) but not by any \( C_i \). The latter holds for all natural objects, which always have a proper type that is more specific than a generalised type, but it may not be true in modelling or programming, where instances of generalised types may be created unless these types are declared as being “abstract”.

2.1 Formalisation

For the following comparison it is useful to introduce the notion of a “concept” as conceived by Frege. Using the terminology of his pupil Carnap (Carnap (1947)), a concept \( C \) has an extension \( \epsilon(C) \), all instances falling under the concept \( C \), and an intension \( \iota(C) \), a predicate characterising whether an element belongs to the concept or not, so that

\[ \epsilon(C) = \{ x \mid \iota(C)(x) \} \] (1)

We can now state whether a concept \( C \) classifies an instance \( e \), i.e., \( e \in \epsilon(C) \), using an extensional or an intensional viewpoint:

\[ e \in \epsilon(C) \iff \iota(C)(e). \] (2)

Using the usual interpretation of generalisation, a concept \( C_g \) is more general than another concept \( C_i \), i.e., \( C_i < C_g \), if its extension includes all the instances falling under \( C_i \):

\[ C_i < C_g \iff \epsilon(C_i) \subseteq \epsilon(C_g).
\] (3)

Equations 2 & 3 make it obvious that a direct comparison between classification and generalisation is hindered by the fact that the former’s domain typically consists of instances and the latter’s domain typically consists of concepts. One way to enable an adequate comparison would be to look at concepts \( C_i \) and \( C_g \) as instances, i.e., instead of considering their type facets (e.g., their attributes), by which they define the shape of their instances, one could consider their instance facets (e.g., their property values), that is, properties that are associated with the concepts themselves (Atkinson & Kühne (2003)). For example, for the purpose of modelling a pet store, the instance facet of the concept “Dog” could have a tax rate property with the value “16%” whereas the instance facet of concept “DogFood” could have the value “10%” for the same property. However, this way of looking at concepts is unfamiliar to most and would also imply that we had to use meta-concepts to classify concepts.

Therefore, I perform the comparison at the instance level, using generalisation at the instance level by means of the notion of an abstract object (Kamlah & Lorenzen (1996)). An abstract object represents all instances that are considered to be equivalent to each other for a certain purpose, e.g.,

\[ P(|x|_{\sim}) \iff \forall y : y \sim x \rightarrow P(y) \]

The abstraction operator \( | \cdot | \) gives us a way to make a statement about all instances that are considered equivalent to each other. For example, while

\[ \text{HasFourLegs}(\text{Lassie}) \]

is true if the instance “Lassie” has four legs, the expression

\[ \text{HasFourLegs}(|\text{Lassie}|_{\sim_{\text{dog}}}) \] (4)

is only true if all instances considered equivalent to “Lassie” have four legs (by means of \( \sim_{\text{dog}} \), which here is meant to regard all instances of subspecies “Dog” as equivalent with each other).
Note that \( |\text{Lassie}|_{\sim_{\text{dog}}} \) is not a type/concept. What we assert of \( |\text{Lassie}|_{\sim_{\text{dog}}} \) is asserted of an instance (and all other instances that are equivalent to it). The expression

\[ \text{NamedByLinnaeusIn1758}(|\text{Lassie}|_{\sim_{\text{dog}}}) \] (5)

\footnote{The type of the left hand side element in a relation.}

2.2 Similarities

When taking a sufficiently broad view, it is indeed possible to identify a number of similarities between classification and generalisation.

2.2.1 Abstraction

Classification and generalisation can both be regarded as abstraction mechanisms. By abstracting away from individual detail they give rise to relationships that are typically many-to-one, i.e., many elements are abstracted from to yield one representative. By using the representative, i.e. the type or the generalisation, one is able to assert facts about a large number of elements, e.g. what relationships they may engage in, without referring to individual elements. Hence, both classification and generalisation help to reduce the complexity of specifications.

2.2.2 Membership

Instances and subtypes can both be viewed as belonging to or being members of their respective representative. Each instance is a member of the set characterised by its type, and each subtype is a member of the subtype hierarchy of which the generalisation forms the top.

2.2.3 Description

Obviously, the representatives can be regarded as describing their members, i.e., the members are at least partially defined by virtue of their membership. Consider aggregation as an example of another many-to-one relationship that does not have such a descriptive flavour: the whole can be used as a representative of its parts, but does not describe the latter.

2.2.4 Reuse

Finally, both types and generalisations can be usefully kept in libraries as they allow modellers to derive new elements from existing ones, as instances or subtypes respectively.
They, therefore, both support reuse and incremental development in the sense that a modeller may reuse such library elements and only needs to specify what is different about the new element.

2.3 Differences

While the previous section appears to suggest that classification and generalisation have a lot in common, it actually refers to rather superficial similarities which distract from the fundamental differences between them.

2.3.1 Abstraction

Everything that has been stated in section 2.2.1 regarding the nature of classification and generalisation as abstraction mechanisms and the reduction of complexity can also be stated about aggregation, leaving only their descriptive nature and utility in libraries as differences. The vast differences between, say, generalisation and aggregation highlight what little significance a commonality in terms of “supporting abstraction” actually has.

2.3.2 Membership

While types and generalisations may both be regarded as representatives, they are in fact at different logical language levels with respect to each other. In section 2.1, I used expressions 4 & 5 to demonstrate the difference between properties at the instance level and the type level. Note that when using an abstract object (a generalisation) we can directly assert a property as in expression 4. To achieve the same with a type, we actually need to use universal quantification as in

\[ \forall x : \iota(\text{Dog})(x) \rightarrow \text{HasFourLegs}(x). \]

This universal quantification is often left implicit, using the pragmatic assumption that assertions are made about the instances of a type, rather than the type itself (expression 5 being an example for the latter). Yet, the above must not detract from the fact that \(|C|_\sim\) refers to all elements within the equivalence class implied by \(\sim\), whereas \(\varepsilon(C)\) refers to the set of all such elements, i.e., the equivalence class itself (given a corresponding \(\iota(C)\)).
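The distinction between an intension \( \iota(C) \), the extension \( \varepsilon(C) \) it induces, and the abstract object \( |x|_\sim \) can be made concrete in a small Python sketch (my own illustration with hypothetical data, not from the paper):

```python
# Sketch: ι(Dog) as a predicate, ε(Dog) as the single set it induces.
universe = {"Lassie", "Rex", "Tweety"}
iota_dog = lambda x: x in {"Lassie", "Rex"}          # assumed intension
epsilon_dog = {x for x in universe if iota_dog(x)}   # the extension: one set

# ε(Dog) is one value, the set itself ...
assert epsilon_dog == {"Lassie", "Rex"}

# ... whereas |Lassie|_~ stands for each equivalent element in turn,
# here enumerated explicitly:
abstraction_of_lassie = [y for y in universe if iota_dog(y)]
assert set(abstraction_of_lassie) == epsilon_dog
```

The comprehension yields the set as a single result, while the enumeration yields its members one by one, mirroring the single-valued versus multi-valued readings discussed above.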
Hence, when associating meaning to \(\iota(C)\) and \(\sim\) by using a mapping \(\mu\), \(\mu(|C|_\sim)\) is multi-valued, i.e., here “\(\mu\)” is a relation, whereas \(\mu(C)\) has a single result, i.e., here “\(\mu\)” is functional. Assuming a stratification of values in which sets of objects rank higher than the objects they contain (see Russell’s Theory of Types (?)), types are clearly at a higher logical level than their instances and hence also at a higher level than generalisations of their instances. From the fact that the result of classification is a set (rather than all the members of the set), it follows that classification gives rise to a relation which is not transitive. While we have equation 6 for generalisation, and hence transitivity, i.e.,

\[ C_1 \prec C_2 \land C_2 \prec C_3 \rightarrow C_1 \prec C_3, \] (6)

for classification, obviously an element \(C_1\) with \(C_1 \in \varepsilon(C_2)\) need not be an element of \(\varepsilon(C_3)\), even if \(C_2 \in \varepsilon(C_3)\). Figure 1 uses a 3D variant of a Venn diagram to illustrate the fact that an element (Lassie) is automatically also an (indirect) member of the supertype of its type, but is not automatically a member of the type of its type.

Figure 1: Differences in transitivity

In (Kühne (2006)), I argued that in contrast to classification, generalisation cannot be used to erect a metalevel hierarchy because of the transitivity of the relation it implies.

2.3.3 Description

Referring to the previous section 2.2 again, it is true that types and generalisations both have a descriptive role.
However, note that while a type typically only shapes the instance facet of its instances, i.e., controls its instances’ properties, a generalisation is typically only used to shape the type facet of its subordinated elements, i.e., supertypes are typically used to make subtypes inherit type facet features (such as the attribute “age: Integer”). It is, however, possible to influence the type facet with types (see “deep instantiation” (Kühne & Schreiber (2007))) and to influence instance facets with specialisation (see Smalltalk “class variables” (Goldberg & Robson (1983))).

2.3.4 Reuse

As a result of their different descriptive roles, types and generalisations have rather different purposes when used as library elements. Types provide a vocabulary that is used without being refined. There is no specification of a “difference” to the derivable element; one simply provides values for the schema made available through the type. Generalisations, on the other hand, are refined when used, by specifying what has to be added to derive a specialised element from a general one. In summary, although classification and generalisation share some superficial properties, they are intrinsically and unmistakeably different.

3 Analysis of Approaches

In the following I examine a number of approaches that see benefits in de-emphasising the differences between classification and generalisation in one way or the other.

3.1 In the Name of Simplicity

Alloy is a language for the specification of software systems (Jackson (2006)). One of the tutorials on Alloy contains the following statement: “Set membership and subsets are both denoted in.” (Seater & Dennis (2008)). In other words, Alloy uses one “in” operator for both “∈” and “⊆”. This is surprising at first because of the fundamental differences between these two relations. As discussed earlier on, “⊆” (corresponding to generalisation) is transitive, whereas “∈” (corresponding to classification) is not.
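As an aside, the differing transitivity of the two relations can be observed directly in any language with first-class classes. The following Python sketch (my illustration, not part of the paper) uses `issubclass` for the “⊆”/generalisation reading and `isinstance` for the “∈”/classification reading:

```python
# Sketch: generalisation chains transitively, classification does not.
class Animal: pass
class Dog(Animal): pass        # Dog is a subtype of Animal (generalisation)

lassie = Dog()                 # lassie is an instance of Dog (classification)

# Generalisation is transitive: an instance of the subtype is
# (indirectly) an instance of the supertype as well.
assert issubclass(Dog, Animal)
assert isinstance(lassie, Animal)

# Classification is not: Dog is an instance of `type`, but lassie,
# an instance of Dog, is not an instance of the type of its type.
assert isinstance(Dog, type)
assert not isinstance(lassie, type)
```

The last assertion is the Venn-diagram observation of Figure 1 in executable form: membership does not climb a type-of-type chain.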
This apparent puzzle is easily resolved by observing that Alloy does not fully support modelling at the instance level. The modeller is rather required to model instances as singleton sets, as in

one sig Lassie extends Collie {}

Here the set of all collies is specialised by a singleton set \( Lassie \) which is used to uniquely reference the intended instance “Lassie”. This way the “in” operator can be considered to only support the “⊆” interpretation. Checking

Lassie in Collie

yields true, because the set \( Lassie \) indeed contains a (unique) element which is also a member of the set \( Collie \). The good news is that, hence, “in” does not really confound the set membership and subset relations as quoted above. This confusion may still be claimed regarding a modeller’s intention, but technically “in” always corresponds to “⊆”. The bad news is, however, that representing instances as singleton sets

- can be very confusing for novices, and
- denies the modeller the ability to distinguish between a singleton set and its instance.

Novices will read “in” to mean “∈” in situations like this one:

Lassie in House

when trying to check whether Lassie is at home, i.e., test whether “Lassie” is among the elements of the set “House”, while they are actually checking whether the (singleton) set \( Lassie \) is a subset of the set \( House \). Strangely, the below expression, using \( \text{some} \) (“∃”) to extract an element \( x \) from the set \( Lassie \),

\[ \text{some } x : \text{Lassie} \mid x \text{ in House} \]

also yields true, although there is no element \( x \) in \( Lassie \) which is a subset of \( House \). Although this may suggest that here “in” is interpreted as “∈” after all, Alloy interprets the unique instance within \( Lassie \) as the singleton set containing “Lassie”. This also takes place when referring to generated instances, such as “Collie$0”, which are converted into singleton sets, e.g., “{Collie$0}”.
Furthermore,

\[ \text{some } x : \text{House} \mid x \text{ in House} \]

yields true if the house is not empty, again strongly suggesting that “in” is interpreted as “∈”. Yet again, however, elements in \( House \) are converted to their singleton sets before the test, so that the actual “⊆” test yields the expected result. In total, even experienced modellers need to be very wary in order to avoid misreading Alloy specifications that involve instances. Sometimes an Alloy expression (e.g., \( Lassie \)) appears to denote an instance, as it is used to uniquely reference a certain element, and sometimes it clearly is used as a set (as in \( \text{some } x : \text{Lassie} \mid \ldots \)). While there is always a consistent technical reading of “in” as “⊆”, some of its usages are highly suggestive of an “∈” interpretation. Understanding Alloy’s results thus requires an understanding of its implicit conversion of elements to their respective singleton sets. Of course, it could be argued that the latter is not necessary and that Alloy manages to always associate the intended meaning of either “∈” or “⊆” to “in”, but this implies that one accepts a blurring between classification and generalisation which is potentially dangerous (the intention could deviate from the actual meaning) and inappropriate for novices who have not yet been sufficiently exposed to a proper distinction between classification and generalisation. The latter is a problem when using Alloy in first year courses such as those devised by Noble et al. (Noble et al. (2008)). In section 2.3.1, I already pointed out the difference between properties at the instance level and at the type level (see expression 4 versus expression 5). Due to Alloy’s approach to representing instances with singleton sets, the modeller loses the ability to separate these two levels of properties.
Technically, it is an error to ask an instance for its member count since it is not a container type like a set, but an Alloy representation of an instance will happily answer “1”. This may be regarded as a feature rather than a bug since it enables specifications which are agnostic as to whether they are dealing with instances or sets. However, this convenience comes with a price because one loses the ability to detect (i.e., type check for) erroneous data flows which lead instances to appear in places where only sets should occur and vice versa. The rationale given for Alloy’s treatment of instances as singleton sets\footnote{A set with exactly one and thus unique instance.}, and the corresponding apparent unification of “∈” and “⊆” to “in”, is the desire to uniformly allow the application of a single operator to both scalars (instances) and sets (Jackson (2006)). Overall, Alloy represents a highly elegant language design which enables very complex and readable specifications. However, I believe that prioritising simplicity (just one “in” operator) and uniformity (applicable to both instances and sets) does not justify the loss in expressiveness and clarity as discussed above. One could maintain the uniformity requirement of having just one “in” operator by overloading it, i.e., using the same syntax for “∈” and “⊆” but retaining their different meanings. This approach would have profound effects on the arguments. Yet, this would exclude the possibility of detecting type errors resulting from an unintended usage of a set in place of an instance and vice versa. I claim the difference between an element and a set, including the set that only contains said element, is big enough to warrant abandoning simplicity and uniformity in favour of improved type checking capabilities. With respect to Alloy’s treatment of instances, I therefore propose to allow the direct modelling of instances and to use two different operators for “∈” and “⊆”.
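For contrast, here is a Python sketch (my illustration, not from the paper) of a setting with two distinct operators, where the element/singleton confusion discussed above becomes a detectable error rather than a silent reinterpretation:

```python
# Sketch: distinct operators for membership ("in", ∈) and subset ("<=", ⊆).
house = {"Lassie", "Timmy"}

assert "Lassie" in house      # ∈ : element membership
assert {"Lassie"} <= house    # ⊆ : subset test on the singleton set

# An element and its singleton set are different values ...
assert "Lassie" != {"Lassie"}

# ... and mixing the operators up is rejected instead of being coerced.
raised = False
try:
    "Lassie" <= house         # an ∈/⊆ confusion
except TypeError:
    raised = True
assert raised
```

Keeping the two operators apart is exactly what makes the last misuse type-checkable, which is the capability the section argues should not be traded away for uniformity.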
3.2 In the Name of Flexibility

Motivated by the necessity to define the meaning of metalevel boundaries in metalevel hierarchies, and in order to support sanity checks for the integrity of such hierarchies, Atkinson and Kühne have proposed a strict metamodelling doctrine. According to the latter, it is possible to fully understand each level in a metamodelling hierarchy as being instantiated from only the level above it (Atkinson & Kühne (2001)). In order to enforce this property, the only relationships allowed to cross metalevel hierarchy boundaries are “instance-of” relationships. Subsequently, this doctrine has sometimes been criticised as leading to inflexible infrastructures, and approaches have been developed that relax the strictness requirement in order to provide more flexible metamodelling infrastructures (Gitzel & Merz (2004), Varró & Pataricza (2003)). In particular, Varró and Pataricza take issue with the strict four-layer architecture of the OMG in that it leads to scenarios in which “… concepts are replicated both on metalevel and model-level …” (Varró & Pataricza (2003, p. 191)). As a remedy, they advocate the introduction of “reification” as a unification of the notions of instantiation and specialisation, regarding the two as highly compatible with each other: “As a result, two model elements can simultaneously be in subtype and instance-of relations …” (Varró & Pataricza (2003, p. 194)). Varró and Pataricza even provide a proof for this proposition (Varró & Pataricza (2003, p. 195–196)). Their proof relies on the fact that classification and generalisation both give rise to many-to-one relations and that there are pairs of models which can be viewed as being in a classification or a generalisation relation. However, while Varró and Pataricza would read the aforementioned “or” as a logical “or”, I maintain that it must be read as a logical “xor”. Note that their example using “Graph” and “BipartiteGraph” (Varró & Pataricza (2003, Fig.
6)) excludes attributes. If elements of “Graph” defined attributes—e.g., “Node” could have the attribute “outDegree”—one could clearly see that in the instantiation case the nodes of “BipartiteGraph” would have values for “outDegree”, whereas in the specialisation case the nodes of “BipartiteGraph” would inherit the “outDegree” attribute. An obvious solution to the “attribute” dilemma is that nodes of “BipartiteGraph” have both “outDegree” values and attributes, but this is most certainly not the intended structure. Furthermore, if one considered not only model pairs, but deeper derivation structures, the difference in transitivity between classification and generalisation would become apparent. For these reasons, I consider it inappropriate to view instantiation and specialisation as incarnations of a unified refinement notion that may occur simultaneously between two models. Gitzel and Merz aim to reduce the number of concepts in metamodelling hierarchies (Gitzel & Merz (2004)). They model “JavaAccount” as an instance of “Account” using a new form of “instance-of” (classification) relationship which “… is used in a similar fashion to inheritance relationships …” (Gitzel & Korthaus (2004, p. 72)). In essence, they are relaxing the strictness doctrine (Atkinson & Kühne (2001)) to allow instantiation across several metalevel boundaries. However, as is apparent from the “Account” / “JavaAccount” example, their new form of “instance-of” relationship in fact has specialisation semantics as opposed to instantiation semantics. Intuitively, every instance of “JavaAccount” should also (indirectly) be an instance of “Account”. Also, having elements at one metamodelling level that are instantiated from several different metamodelling levels higher up is incompatible with the requirements for a metamodel hierarchy erecting relation (Kühne (2006)) and, as a matter of fact, with Russell’s Theory of Types (?).
In ontological metalevel hierarchies (Atkinson & Kühne (2003)), it is obvious that instantiation has to be anti-transitive and that instantiation may only occur from one level to an adjacent one. Here is an ill-formed syllogism that violates this rule, representing a logical fallacy:

\[ \text{Man is a species} \]
\[ \text{Socrates is a man} \]
\[ \therefore \text{Socrates is a species} \]

If “species” is replaced with “mammal” then the syllogism works as intended, because then the first “is a” corresponds to generalisation as opposed to classification. The above ill-formed syllogism illustrates the inappropriateness of assuming that an instance (Socrates) could be classified by an element that is two levels higher up in the metalevel hierarchy (species). Gitzel et al. appear to require certain concepts at more than one metamodelling level since there are metamodelling hierarchies which cannot be aligned with each other (Atkinson & Kühne (2001)). In such cases, which include primitive types like “integer”, strictness can be maintained for the hierarchies individually, and a single hierarchy may be used multiple times in conjunction with another one. I believe that this is an acceptable form of replication since it corresponds to “multiple usage” rather than “multiple definition”. Gonzalez-Perez and Henderson-Sellers also appear to treat classification and generalisation as closely related relationships because they let both cross metalevel boundaries in parallel (Gonzalez-Perez & Henderson-Sellers (2006, p. 88, Fig. 20)), but note that their layers are not aligned with metalevels, the latter being defined by classification relationships only. Although their usage-oriented layering appears to blur the differences between classification and inheritance, the underlying level hierarchy does not suffer from any such conceptual difficulties.
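The syllogism’s fallacy can be replayed as a type check in Python (my illustration; the class names are hypothetical), where a metaclass plays the role of the ontological metatype:

```python
# Sketch: instantiation may not cross two metalevel boundaries at once.
class Species(type): pass                 # metalevel M2

class Man(metaclass=Species): pass        # M1: Man is an instance of Species

socrates = Man()                          # M0: socrates is an instance of Man

assert isinstance(Man, Species)           # one boundary: well-formed
assert isinstance(socrates, Man)          # one boundary: well-formed
assert not isinstance(socrates, Species)  # two boundaries: rejected

# Replacing the first "is a" by generalisation makes the chain valid:
class Mammal: pass
class Human(Mammal): pass
assert isinstance(Human(), Mammal)
```

The third assertion is the “Socrates is a species” fallacy failing mechanically: classification does not compose with itself, while the last assertion shows classification composing with generalisation as expected.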
Summarising, with respect to attempts to relax the strictness doctrine, I argue that there is no value in weakening the significance of metalevel boundaries. On the contrary, abandoning strictness opens the door for errors that can no longer be detected as such. If there is value in partitioning a metamodelling hierarchy along boundaries other than the metalevel boundaries, such structures should be overlaid, or offered as alternative views, but should not undermine the integrity of the metalevel boundaries themselves.

3.3 In the Name of Utility

In order to promote consistency and parsimony, the OMG introduced a “Core” model from which both the Meta-Object Facility (OMG (2006)) and the UML definition (OMG (2007)) are derived. Regarding the UML definition, note that it is both specialised⁵ from the Core (OMG (2004, p. 12, Fig. 7.2)) and also instantiated from the Core⁶ (OMG (2004, p. 14, Fig. 7.4)). Formally, we both have \(\mathrm{UML} \in \varepsilon(\mathrm{Core})\) (Core classifies the UML definition) and \(\mathrm{UML} \prec \mathrm{Core}\) (Core generalises the UML definition). In section 3.2, I have already argued that this is inappropriate, i.e., impossible to maintain in a sound manner. However, I was making the assumption that both models are used with an ontological interpretation, which was appropriate regarding Varró and Pataricza’s examples. As observed by Atkinson and Kühne, however, the OMG uses the Meta-Object Facility (MOF), and hence by implication the Core, both as an ontological type model and as a linguistic type model (Atkinson & Kühne (2005, p. 409, Fig. 15(b))). In the following, I will investigate under which circumstances it is possible for a linguistic type model to be both the type model and a supermodel for another model.
Formally, we are looking for elements \(M\) (UML) and \(MM\) (Core) which fulfil the following constraint:

$$M \in \varepsilon(MM) \land \varepsilon(M) \subseteq \varepsilon(MM). \quad (8)$$

Assuming an element \(m\) (a UML model), the above constraint with \(\varepsilon(M) = \{m\}\) and \(\varepsilon(MM) = \{m, M\}\) yields

$$M \in \{m, M\} \land \{m\} \subseteq \{m, M\}$$

which fulfils the constraint. The following observations are noteworthy:

- The only way in which constraint 8 may (non-trivially) be true is for an \(M\) that has a type role. If \(M\) were an instance without a type role, i.e., \(\varepsilon(M) = \emptyset\), it could not meaningfully take the place of \(M\) in constraint 8 since it would not have an extension whose elements may also appear in the extension of \(MM\); its extension could not be non-trivially considered to be a subset of \(MM\)’s extension.
- From constraint 8 it follows that \(\varepsilon(MM)\) must contain two elements which can be considered as being in a classification relation. Formally,

$$M \in \varepsilon(MM) \land \varepsilon(M) \subseteq \varepsilon(MM) \land \varepsilon(M) \neq \emptyset \rightarrow \exists m : M \in \varepsilon(MM) \land m \in \varepsilon(MM) \land m \in \varepsilon(M).$$

Hence, \(MM\) must (at least implicitly) define a notion of instantiation between (some of) its elements. Indeed, the Core/MOF defines instantiation between its elements.
- The above point implies that \(MM\) does not only instantiate \(M\) (as well as being \(M\)’s supermodel) but also \(m\), an instance of \(M\). In fact, this is the case in our example, as the MOF/Core

⁵Actually, the term “dependent on” is used, which refers to packaging and can appropriately be regarded as specialisation.
⁶Actually, it is instantiated from the MOF which contains the Core.

possible, I offered a logically consistent view, restoring an adequate separation of classification versus generalisation.
To the best of my knowledge, in particular the formal investigation into whether the OMG’s view of the Core/MOF as both a repository format and a language definition supermodel has a sound interpretation is novel. Third, I argued that by acknowledging the differences between classification and generalisation—e.g., by complying with the “strict metamodelling” doctrine—one gains an opportunity for sanity checks regarding the integrity of metamodelling hierarchies that otherwise would not exist. Detecting incorrect dataflow in specifications becomes possible if instances and types are not substitutable for each other. Logical fallacies that are introduced by crossing multiple metalevel boundaries at once or by introducing circular definitions are impossible if metalevels are stratified according to Russell’s Theory of Types. Finally, a metamodel can be defined to have a dual purpose in a sound manner if it features elements that can classify each other. Ultimately, there is no single correct way of designing languages, and in particular Alloy’s prioritisation of flexibility over safety can certainly be defended. Also, I am aware that the authors of the work which I subjected to some critical remarks may take different definitions of classification and generalisation as a basis and hence arrive at different conclusions regarding their compatibility, maintaining internal consistency. However, I hope that the observations made in this paper may be of use to future language designers. While they may not choose to subscribe to the most rigorous treatment I have suggested, they will at least be in a position to consciously deviate from it, explicitly rationalising why a non-strict treatment should be preferred and whether it is worth accepting the resulting loss in precision and loss of sanity checks.

Acknowledgements

I would like to thank Lindsay Groves for his very helpful comments on a draft of this paper.

References
MODIFICATIONS AND INNOVATIONS TO TECHNOLOGY ARTIFACTS

KEVIN C. DESOUZA*, Department of Information & Decision Sciences, University of Illinois at Chicago, 601 South Morgan Street, M/C 294, Chicago, Illinois 60607 USA. Phone: +1 312 829 8447. E-mail: kdesou1@uic.edu

YUKIKA AWAZU, Institute for Engaged Business Research, Engaged Enterprise, 555 West Madison Street, Tower 1, Suite # 3705, Chicago, Illinois 60661 USA. E-mail: awazu@engagedenterprise.com

ARKALGUD RAMAPRASAD, Department of Information & Decision Sciences, University of Illinois at Chicago, 601 South Morgan Street, M/C 294, Chicago, Illinois 60607 USA. E-mail: prasad@uic.edu

* Corresponding author

November 17, 2004. Paper presentation at the Diffusion Interest Group in Information Technology - DIGIT 2004 (Washington, D.C.).

Recommended Citation: DESOUZA, KEVIN; AWAZU, YUKIKA; and RAMAPRASAD, ARKALGUD, "MODIFICATIONS AND INNOVATIONS TO TECHNOLOGY ARTIFACTS" (2004). DIGIT 2004 Proceedings. 3. http://aisel.aisnet.org/digit2004/3

This material is brought to you by the Diffusion Interest Group In Information Technology at AIS Electronic Library (AISeL). It has been accepted for inclusion in DIGIT 2004 Proceedings by an authorized administrator of AIS Electronic Library (AISeL). For more information, please contact elibrary@aisnet.org.

We thank members of DELTA for their participation in the project. We also acknowledge the comments of two anonymous reviewers, which helped us improve the quality of the paper.

ABSTRACT

What happens to a technology artifact after it is adopted?
It has to evolve within its particular context to be effective; if it doesn’t, it will become part of the detritus of change, like the many genes without a discernible function in a living organism. In this paper, we report on a study of post-adoption technology behavior that examined how users modified and innovated with technology artifacts. We uncovered three types of changes made to technology artifacts: personalization, customization, and invention. Personalization attempts are modifications involving changes to technology parameters to meet the specificities of the user; customization attempts occur to adapt the technology parameters to meet the specificities of the user’s environment; and inventions are exaptations of the technology artifact. The paper presents a grounded theoretic analysis of post-adoption evolution based on in-depth interviews with 20 software engineers in one multi-national organization. We identify a life-cycle model that connects the various types of modifications made to technology artifacts. The life-cycle model elaborates on how individual and organizational dynamics are linked to the diffusion of innovations. While the research is still in progress and the post-adoption evolution model has to be refined, the research has significant value in understanding the full life-cycle of adoption of technological artifacts and how maximum value is derived from them.

1. INTRODUCTION

The current IS literature on innovations is rich in studies addressing factors that contribute to (or hamper) the decision to adopt technologies (Zmud, 1983; Swanson, 1994; Rai, 1995). In comparison to this rich literature, there are only a few studies that have examined post-adoption decision behavior (Limayem et al. 2003; Bhattacherjee 2001).
As noted by Bhattacherjee (2001), “while the initial acceptance of [IT] is an important first step towards realizing [IT] success, long term viability…and its eventual success…depend[s] on its continued use rather than first time use”. Our understanding of how users interact with technologies after their decision to adopt is scant. Researchers such as Orlikowski (1993), Majchrzak et al. (2000), Poole and DeSanctis (1990), Orlikowski and Yates (1994), and Yates and Orlikowski (1992) have built on the work of Giddens (1984) to uncover the dynamics of technology structuration in collective settings (i.e., teams, groups, or organizations). Their findings tell us that technology use does not occur in a deterministic fashion; rather, it is emergent. Technology is frequently structured by individuals to meet their contexts. While we know that technology gets structured, we do not know the nature of these structurations. In this paper, we will describe three ways in which users modify (structure) technology artifacts. The IS literature has also, for the most part, treated users as passive takers of technology. To do this is to ignore the fact that users are “knowledgeable” and “creative” in how they use technology. As rightly pointed out by Nambisan et al. (1999, p. 365), “Technology users, by and large, have been treated as passive recipients of innovative artifacts. Indeed, a dominant view in the IS innovation literature continues to be a technology transfer perspective where the locus of creative activity is the IT organization”. Given the trends in current technology development, we cannot afford to ignore user-technology interaction dynamics. As noted by von Hippel and Katz (2002), and Thomke and von Hippel (2002), customers (users of technology) are innovators. Many organizations have abandoned the act of trying to figure out customer requirements in the design process of product development, and have instead equipped users with toolkits.
This is because much of the information possessed by customers is “sticky” (von Hippel, 1991); hence the customer has a hard time articulating these needs to the product designers. User toolkits allow customers to conduct innovations and build variations of products to meet their idiosyncratic and peculiar needs. The use of toolkits has been shown to increase customer satisfaction, save organizations the cost and effort involved in articulating user needs, and reduce design and development times (Thomke and von Hippel, 2002). If we examine the state of the art in information technology and systems development, we see a similar movement occurring. Traditionally, organizations spent great effort, time, and resources to elicit user requirements prior to systems development. The use of the waterfall development model was popular; however, as we quickly realized, users cannot clearly articulate their needs. More frustrating for designers was the fact that requirements changed over time. This resulted in poor-quality systems development, runaway projects, a decline of trust in the IS/IT function, and many more adverse effects (Keil and Rai, 2000). Today, we are moving to more agile development methods, e.g., Extreme Programming. The goal here is to bring the user into the design and development phases. By bringing the user closer to the design, feedback will be forthcoming on a regular basis and a more acceptable system will be calibrated. The next logical step is to put the user in control of innovations and modifications to technology artifacts. In the mobile phone industry, users are being provided with toolkits that they can use to customize the interface, write their own code, develop their own procedures, write games, etc. (Füller et al., 2004). As researchers, we must devote more energy to gaining an understanding of how users modify (a form of innovation) technology artifacts.
An understanding of this will help us better prepare for innovations and manage innovation cycles. For instance, organizations can use the innovative power of users to decide how to enhance their existing product offerings and to understand future trends in the marketplace. As pointed out by von Hippel (1996) in his conceptualization of “lead users” – “Lead users face needs that will be general in a marketplace – but face them months or years before the bulk of the marketplace encounters them”. These users innovate with technology at a rapid pace, and have foresight as to the future enhancements and updates needed to current technologies and applications. As such, an organization would be foolish not to tap into them for market and forecasting insights; moreover, their “modifications” to the technologies could be used in future versions or product updates. Given the above gaps in the literature, the goals of this paper are – [1] to elaborate on ways users modify technology artifacts and [2] to highlight a generic process model that connects the various types of modifications in a process maturity fashion. The rest of the paper is organized as follows. Next, we briefly elaborate on our methodology. In section 3, we highlight the various kinds of modifications. Following this, in section 4, we propose the generic life-cycle model and discuss a few variants that might exist. Section 5 concludes the paper with a look at our ongoing research and implications for practitioners and scholars.

2. METHODOLOGY

The focus of this paper is on theory building rather than theory testing. Due to the lack of existing frameworks to guide our investigation and the novel nature of the phenomenon being examined, we chose to conduct a qualitative research study (Trauth, 2001; Benbasat et al., 1987). Among the rich array of qualitative methodologies available, we chose to approach the research questions using a grounded theory approach (Glaser and Strauss, 1967).
The grounded theory approach has several salient characteristics that make it apt for the current research. The aim of grounded theory is to allow the theory to emerge, rather than to impose an existing theoretical frame. The researcher can begin coding as soon as data is collected; where ambiguity or equivocality exists, the researcher is allowed to go back to the research site and seek clarification. New information is then synthesized with the existing conceptualizations, and reinforcements or modifications are made. Following Tyre and von Hippel (1997), Van de Ven and Poole (1990), and Van de Ven and Polley (1992), we focused on specific events as the unit of analysis. The event of interest was the modification of the technology artifact by the user.

Table 1: Demographics on Interview Respondents

<table> <thead> <tr> <th>Characteristic</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Mean Tenure in the Organization</td> <td>6.23 Years (Min 1.2; Max 8.2)</td> </tr> <tr> <td>Mean Tenure in Software Engineering Positions</td> <td>1.82 Years (Min .5; Max 2)</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Previous Employment Positions</th> </tr> </thead> <tbody> <tr> <td>Accounting and Financial Analyst</td> </tr> <tr> <td>Business Analyst</td> </tr> <tr> <td>Marketing and Client Services</td> </tr> <tr> <td>Management Consultants</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Education and Professional Training</th> </tr> </thead> <tbody> <tr> <td>Number that Possessed Formal IS Education</td> </tr> <tr> <td>Most Common Education Degree</td> </tr> <tr> <td>Highest Education Level</td> </tr> <tr> <td>Lowest Education Level</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Information Systems and Programming Experience</th> </tr> </thead> <tbody> <tr> <td>Expertise in Programming</td> </tr> <tr> <td>Attended Programming Computer Classes / Certificate Programs</td> </tr> <tr> <td>Number of Programming Computer Classes / Certificate Programs</td> </tr>
</tbody> </table>

Data for the study was gathered from one organization. The organization, DELTA, a pseudonym, is in the software development business. The organization has offices in 6 North American locations, 2 European locations, and 1 location each in Australia and South America. We gathered data from the US Midwest location. Data was gathered through multiple mediums. First, the organization allowed its software engineers to participate in a survey. The survey elicited the ways in which the engineers modified their Integrated Development Environment and basic demographic information about the software engineers, such as tenure with the organization, experience with programming, etc. (see Table 1). Table 1 contains demographic information on our interviewees. The survey covered four main areas – employment histories; education and professional training; information systems and programming experience; and basic information on the type of modifications conducted to the IDE. Table 1 reports on the first three types of information elicited; we describe the types of modifications in the rest of the paper. This survey helped us gauge the kinds of modifications we would encounter and gave us a background to begin interviews. DELTA’s software engineering force was peculiar in one respect. Most of the software engineers did not come from traditional programming or computer science backgrounds; rather, they were originally in the business and management domains of the organization. These included areas such as marketing, consulting services, operations management, and even accounting and financial functions. This salient point makes our findings more interesting, as our sample of software engineers truly represented “customers” of technology. We gathered data from software engineers who were relatively new to the organization or who had been programming for no more than 2 years.
Most of our interviewees had transitioned into their “programming” roles due to downsizing efforts at the organization. The organization decided that it was in its best interest to have individuals who possessed business knowledge conduct the design functions as well, so that there would be less ambiguity and risk in understanding client needs. We chose to focus our attention on ‘new’ software engineers, as this allowed us to gain a sense of how novices to a given technology would engage in customization, modifications, and innovations to the technology artifact. Prior research has shown that novices and experts do not solve problems or approach problem formulation in similar manners (Simon, 1947). Focusing on novices allows us to construct a process model of how changes to the technology occur as the individual improves her knowledge of the application and as the individual’s surroundings (her team or group) become acquainted with the application. Focusing on ‘new’ engineers affords us an opportunity to understand the life-cycle of modifications and innovations to technology artifacts that would not be possible if our sample consisted of ‘expert’ engineers or those who possessed significant experience bases. We interviewed 20 software engineers on their usage of an Integrated Development Environment (IDE), specifically Microsoft Visual Studio. Software engineers were asked to elaborate on the nature of modifications conducted to the IDE and the antecedents and consequences of these changes. We asked them to try to recollect their experiences starting from the most distant event they remembered in terms of modifying the IDE. Using this as a starting point, we moved to the present time. Each interview lasted about one hour. Interviews were recorded and later transcribed for analysis. In addition to interviews, we examined the technology artifacts.
Interviewees were asked to bring their laptops to the interviews; this enabled them to visually demonstrate the nature of the modifications conducted to the default IDE interface. Conducting observations of the technology artifact enabled us to verify the accuracy of the software engineers’ recollections, and also enabled us to check for over- or under-estimation of the nature of the modification. Before moving on to the rest of the paper, cautionary comments are in order about the methodology. First, this study is ongoing. We have yet to analyze all of the data; as such, our findings are preliminary, and we may even have to go back to DELTA to seek more information. Second, even though we strived to reduce errors due to the recall of past events by observing the modifications to the technology artifacts, these errors might still be present. We asked engineers about historic events and did not actually see the factors that led up to the modifications. Hence, as part of our ongoing effort, we plan to investigate the research question in a different organization to seek further validity for our findings. Even with these cautionary comments, we believe the contributions of this paper are significant and warrant discussion in the IS community.

3. MODIFICATIONS TO TECHNOLOGY ARTIFACTS

Individuals use technology to meet needs. The need can be one of administrative assistance (e.g. use of a calculator), strategic planning (e.g. decision support tools), or even entertainment and leisure (e.g. playing computer games). Regardless of the type of need, users of technology are rational, in that they will use technology when there is an economic justification to do so. It is this rationality that underpins the conduct of modifications to technology artifacts. Consider a simple example: if you are accustomed to having page margins set at 1 inch all around, and the default on your word processor is 1.25 inches, what will you do?
One option, the non-economical one, is to manually alter the page setup for every document you create. The more economical approach is to customize (modify) your word processor to meet your needs. This modification needs to be done only once. The cost of conducting the modification is low compared to the future benefits.

### Table 2: Definitions of Modifications

<table> <thead> <tr> <th>Type</th> <th>Definition</th> <th>Example</th> </tr> </thead> <tbody> <tr> <td>Personalization</td> <td>Changes to the technology artifact by modifying pre-defined user options to meet the needs of the individual user.</td> <td>Personalizing the appearance of Toolbars</td> </tr> <tr> <td>Customization</td> <td>Changes to the technology artifact by modifying pre-defined user options to meet the needs of a collective setting.</td> <td>Customizing the directory structures for program output to meet organizational standards</td> </tr> <tr> <td>Invention</td> <td>Changes to the technology artifact by creating add-ins or using existing functions for novel purposes</td> <td>Inventing debugging add-ins to facilitate effective and efficient testing of software modules</td> </tr> </tbody> </table>

Modifications to technology artifacts can range from simple to complex (see Table 2). The motivation for modification can be to meet the needs of the user or to meet the needs of a collective entity such as a team or organization. In our research, we uncovered three types of modifications. Modifications to technology can be examined based on two criteria – scope and role (Orlikowski, 1992). What comprises the technology can be considered its scope, and how the technology is used in the organization its role. In this paper, we focus on changes to the scope of the technology. However, we must admit that some of the changes to the scope will impact the role of the technology.
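The economic rationale behind such modifications can be made concrete with a quick break-even calculation. The sketch below is illustrative only; the time costs are hypothetical numbers, not figures from our data:

```python
# Illustrative break-even calculation: a one-time customization versus
# repeated manual adjustment. All numbers are hypothetical.

def breakeven_uses(one_time_cost: float, per_use_cost: float) -> float:
    """Number of uses after which a one-time customization pays off."""
    return one_time_cost / per_use_cost

# Say changing the default margins once takes 60 seconds, while fixing
# the margins manually takes 15 seconds per document.
uses = breakeven_uses(one_time_cost=60, per_use_cost=15)
print(uses)  # 4.0 -- after four documents, the customization has paid for itself
```

The rational user modifies the artifact whenever expected future uses exceed this break-even point, which for frequently used tools is almost immediately.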
An IDE has a definite purpose, which can be broadly stated as enabling the effective and efficient generation and management of software applications. While modifications to the scope of the technology will not result in a change to the overall goal of the IDE, they might help users realize new facets and functionalities that were latent. For the purposes of this paper, we view modifications and innovations to technology artifacts as a type of technology structuration. Specifically, we are concerned with technology structuration involving changes to the components of the technology artifact, i.e. changes to items belonging within the scope of the technology artifact.

---

2 We thank an anonymous reviewer for helping us clarify our thinking on the definitions and labels for each type of modification.

3 We thank an anonymous reviewer for drawing our attention to this point.

**Personalization: Modifications for Flexibility**

Personalization is defined as changes to the technology artifact by modifying pre-defined user options to meet the needs of the individual user. A user’s need for flexibility when working with a technology artifact drives the need for personalization. Consider the following comments by software engineers in Table 3.
<table> <thead> <tr> <th>Table 3: Interview Quotes on Personalization</th> </tr> </thead> <tbody> <tr> <td>“If you were to visit most of our start-up screens [of the IDE]…you would find that they are all different…some have 10 icons for a File Menu…some may have 20 [icons]…some others have toolbars combined….mine reflects the most common options I use…”</td> </tr> <tr> <td>“Being in QA [Quality Assurance]…my world rotates mostly on two views of the IDE…the testing and debugging options…are what I need to use the most often…having them buried down two levels is a pain…I move them to the foreground…”</td> </tr> <tr> <td>“I like my screen layout the following way…this is how I have it on my home PC…it gives me a comfort zone…I do not have to scramble around every time…customizing the layout was one of the first things I did…there was enough to learn about the tool without getting confused about locations of familiar items…”</td> </tr> </tbody> </table>

The motivation for conducting acts of personalization is to make the technology *flexible* to the needs of the user. Drawing on the usage of the term flexibility in industrial engineering (Barad and Nof, 1996), we can define being flexible as the ability to work within a given range. In the context of technology, flexibility calls for changing the established parameters of the technology. For example, changing the appearance of a toolbar by moving one or more icons is an act of modification for flexibility. Changing the background of the display screen is also a modification for flexibility. The user is not creating anything new here; rather, they are personalizing an existing option of the technology within the bounds set by the technology creator. Personalizing the technology artifact can be seen as a way to make the technology flexible with the user’s style of work, work practices, and other preferences. In our discussions with software engineers, we found a wide assortment of modifications for flexibility.
These included changing the default directory pointers, customizing the appearance of the screens, customizing drop-down menus, etc. Personalization is the most basic and simplest form of modification, and is driven by the needs of the user. Each user will decide, based on their mental models and task peculiarities, the nature and scope of the customization. The more tech-savvy a user is, and the more often a user interacts with the technology, the greater the propensity to conduct acts of modification for flexibility. Economics dictates that a user is better off personalizing the technology artifact once, rather than attempting to modify it on a repeated, per-use basis.

**Customization: Modifications for Adaptability**

Customization calls for making changes to the technology artifact by modifying pre-defined user options to meet the needs of a collective setting. Here the user is adapting her technology artifact to make it suitable for the effective and efficient conduct of work practices in a collective setting. Customizations for adaptability are modifications to the technology artifact motivated by the need for the effective and efficient conduct of group work. Modifications for adaptability differ from modifications for flexibility on one salient point – here, the user is adapting to the external environment. Modifications for adaptability are not driven by the individual needs of the user, but are a result of the user’s involvement in an environment. The environment can be the user’s work team, group, or even the organization. Consider the following comments by software engineers in Table 4.
<table> <thead> <tr> <th>Table 4: Interview Quotes on Customization</th> </tr> </thead> <tbody> <tr> <td>“During the initial days, we were ‘experimenters’…none of us knew the most ideal way of doing something…we had twenty different ways of doing something simple…today…we have agreements in place…they determine how we must use the Damn Thing (IDE)…I had to re-organize my directory structures to meet these requirements…”</td> </tr> <tr> <td>“Some complain we have standards…but these have come about from screw-ups…mainly the right hand washing away what the left does…so we all customized our tools to some universally agreed dimensions…”</td> </tr> <tr> <td>“My initial changes (personalization) are still in place…these do not affect anyone…but how I name files or where I post them or managing version issues, these are bigger than me…we agreed on global standards that all developers would adhere to…these called for changes to our individual IDE environments…small stuff but important…”</td> </tr> </tbody> </table>

In our discussions with software engineers, we deduced that the need to adapt is governed by one’s workgroup and project. As discussed at length in the work on adaptive structuration, the use of technology is socially constructed and is hence influenced by the social context. For example, Orlikowski (1993) demonstrated the existence of structuration processes in her study of computer-aided software engineering tool adoption. Majchrzak et al. (2000) described the existence of structuration in studying the adaptation of collaborative technology in virtual team settings. We will not elaborate on the work on adaptive structuration here; however, we wanted to acknowledge the contribution of this work to our understanding of how adaptation occurs in group settings. We do, however, want to discuss a finding not addressed thus far in the adaptive structuration work – the role of standards.
The topic of IS standards has recently been an area of heightened research interest (King and Lyytinen, 2003). In the context of software engineering, customizations occur to meet standards. Standards can be categorized based on the dimensions of purpose (reference point or compatibility) and enforcement (voluntary or mandatory) (Hemenway, 1975; Antonelli, 1994). Software engineers must customize their technology for all combinations of this 2×2 matrix of standards. In the context of working in a group, software engineers have to customize their directory parameters to point to common repositories in order to jointly work on the code; these represent mandatory standards that seek to enhance compatibility. As most software engineering is now conducted on a global basis, IDEs must be synchronized in terms of language, date, time, etc. These are mandatory standards that seek to enforce clear reference points. Two or more individuals working in close quarters might create their own standards to facilitate ease of sensemaking (Weick, 1979). In our discussions, we were advised of a case where three software engineers voluntarily decided to customize their IDE desktops for uniformity, so that each of them could use another’s PC in case one was away from the office and work called for the use of the PC. This represents a voluntary standard meant to increase compatibility and serve as a reference point between the engineers. To summarize, modifications for adaptability are conducted to customize one’s personalized technology artifact to meet the standards set by his/her work group.

**Inventions: Modifications for Exaptability**

Inventions are changes to the technology artifact by creating add-ins or using existing functions for novel purposes. While adaptation is the accumulation of small changes over time that improve an existing function, exaptation is the accumulation of small changes that results in the development of a new function.
Exaptability is defined as the ability to develop new functions, or the utilization of a structure or feature for a function other than that for which it was developed through natural selection (Gould, 1991). Inventions include additions to the existing technology artifact and/or discovering new functions for existing components of the artifact. Inventions are commonly defined as either a new combination of components or a new relationship between previously combined components (Henderson and Clark, 1990; Schumpeter, 1939). As pointed out by Schumpeter (1939, p. 88), “innovation combines components in a new way, or…carrying out new combinations”. Frustrations with the existing technology artifact, coupled with unfulfilled needs, are critical determinants of one’s motivation to invent. Consider the following comments in Table 5.

<table> <thead> <tr> <th>Table 5: Interview Quotes on Inventions</th> </tr> </thead> <tbody> <tr> <td>“The creators of the IDE were forced to be broad and all inclusive in options, tools, functionalities, etc…however we still need more…we have been forced to create our own solutions and add-ons as they are needed for our work…”</td> </tr> <tr> <td>“Why did I write this script…to be frank, because I needed it and it saved me time”</td> </tr> <tr> <td>“Everyone was using the directory function for storing files…this is nice…but I came up…and it helps in addressing version control issues…we have local and global directories that are synchronized via a routine…naming conventions are addressed in the background and conflicts in version managed…this was not part of the original IDE…but it has sure made things easier here…”</td> </tr> </tbody> </table>

Modifications for exaptability include creating add-ins, scripts, modules, etc. to enhance the productivity of the technology. These modifications are “in addition” to the existing technology and are used in conjunction with the original technology.
In the case of IDEs, these add-ins are used to increase the efficiency and effectiveness of programming assignments. For instance, one developer, working on a financial trading module, was frustrated with the way he could run testing using the default setup of the IDE. His frustration led him to compose a macro that read his test data, ran his program, and output the results. Results were then fed through a statistical package for analysis, and the final output was visually displayed using a graphics editor. Exaptations such as these can be considered inventions. In addition to building new components, exaptation is also the use of existing functions in novel ways. These are most commonly referred to as “work-arounds”. Due to limitations of the technology artifact, users find new ways to use existing functions in order to meet their needs. The simplest example is found in the use of statistical packages. Most statistical packages are highly restrictive in terms of the number of variables, types of variables, parameter requirements, etc. To counter these restrictions, users create schemes such as “dummy coding of variables” to work around them. In the context of IDEs, we found software engineers also create work-arounds to increase the effectiveness of tasks. Most work-arounds created had to do with the testing and debugging phases of writing code. For instance, work-arounds to tweak the input and output of test data were common. In one case, a software engineer was frustrated with the lack of effective integration between output files of Microsoft’s Excel and Project software. He took it upon himself to build a work-around using Visual Basic that would integrate the two output files, so that a project manager could easily move data between a costing tool (that used a spreadsheet interface) and his administration tool (that used a project management/Gantt chart layout). Exaptations can occur to meet the needs of the individual or a group.
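The “dummy coding” work-around mentioned above re-expresses a categorical variable as a set of 0/1 indicator columns, so that a package restricted to numeric inputs can still process it. A minimal sketch in Python (the variable and category names are hypothetical, not taken from our data):

```python
# Minimal sketch of "dummy coding": a categorical variable is re-expressed
# as 0/1 indicator columns so that tools which accept only numeric inputs
# can use it. The category values below are hypothetical.

def dummy_code(values, categories=None):
    """Return one indicator column (a list of 0/1) per category, keyed by category."""
    if categories is None:
        categories = sorted(set(values))
    return {c: [1 if v == c else 0 for v in values] for c in categories}

roles = ["QA", "Dev", "QA", "Ops"]
print(dummy_code(roles))
# {'Dev': [0, 1, 0, 0], 'Ops': [0, 0, 0, 1], 'QA': [1, 0, 1, 0]}
```

The user has not extended the package itself; the existing numeric machinery is simply being used for a purpose its designers did not anticipate, which is the essence of a work-around.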
Individuals, as the case above describes, may get frustrated with the existing functionality of the technology and develop their own inventions. Similarly, a team working on a project may exert effort to innovate a new feature because of the benefits it brings to their project and work. Individuals might collectively pool resources in order to build a new technology feature or add-in. As can be witnessed from the proliferation of altruistic software communities, users have a tendency to contribute resources when there is hope for a better and more robust solution than what is currently available. Exaptations are the most complex form of modifications conducted to technology artifacts.

4. PROCESS MODEL FOR USER MODIFICATIONS

How do these modifications occur in the organization? We think there is a sequence to them, and will now propose a process model to explicate how modifications occur in the context of the organization. This life-cycle model was deduced from our discussions with software engineers. We must note that this is only one possible view, albeit the most dominant one based on our data analysis; towards the end of this section we discuss alterations/variants that are plausible. As noted by Van de Ven (1992), the first step towards studying a process activity is to clearly define the meaning of process. In this paper, we use the term process to mean a sequence of events that describes how things change over time. This definition of process “takes an historical development perspective, and focuses on the sequences of incidents, activities, and stages that unfold over the duration of a central subject’s existence” (Van de Ven 1992, p. 170). Examples of prior process models in the literature include the work of Mintzberg et al. (1976) on unstructured decision making and Cohen et al. (1972) on garbage can decision models. Having defined the meaning of process, we must now be specific about the type of process model we are building.
In this paper, we are constructing a life-cycle model (Van de Ven and Poole, 1988; Piaget, 1954). Life-cycle theories assume that change is evident and that change is recognizable as the artifact is transformed from its present state to a future state (Van de Ven and Poole, 1988; Van de Ven, 1992). As an example of a change model, consider the work of Piaget (1954). Piaget (1954) proposed various stages a child goes through as they learn and acquire knowledge about themselves and their environment; passing through the various stages contributes to maturity in child development. The model we propose in this paper is a maturity/life-cycle model (see figure 1). It explicates the types of modifications conducted as the user becomes more sophisticated (mature) in their use of the technology. As the user’s knowledge of how to use the artifact increases, we can expect the modifications to increase in complexity. In order to increase one’s knowledge of the technology, the user must interact with the technology frequently. Much of the learning associated with technology is learning-by-doing (Tyre and von Hippel, 1997), rather than learning before doing. Because of the need to repeatedly interact with the technology artifact, users will be rational and find ways to make this interaction economically efficient. We will now describe the various stages of the process model. The model is composed of five stages. Each stage signifies a maturity level of how the technology is used by the individual. Operability and agility are the beginning and closing stages respectively, and the remaining three signify the types of modifications that occur. The levels are influenced by individual characteristics, the local group the user belongs to, and the organization at large. We will first focus on the linear trajectory between the various stages, represented by the black line.
Following this, we will discuss some variations that are plausible; these are represented by the dotted red lines.

**Stage 1:** When a user is first introduced to a given technology artifact, he/she must learn the “bare essentials” needed to get the technology into an operational state – the “operability” stage. The operability stage is influenced by whether the user has had prior exposure to the technology (e.g. a past version of the software) and prior exposure to similar technology artifacts (e.g. prior use of Notepad or WordPad will help a user gain operational knowledge of how to work with Microsoft Word) (Huber, 1990; Tyre and von Hippel, 1997). The operability stage is also present when users take it upon themselves to experiment with new technologies without an organizational mandate. For instance, the diffusion of SMS messaging systems in organizations has been shown to occur in a bottom-up fashion. A select group of users may begin to use the system to enable easy communication, and then the use may spread to other members of the organization. When users first begin to explore new technologies they are left on their own to figure things out; hence they must rely on their personal knowledge or access to personal knowledge resources, such as friends who may know about the technology.

**Stage 2:** Over time, and through continued exposure and interaction with the technology artifact, the user will begin to conduct modifications – the “flexibility” stage. These take the form of personalization. As discussed earlier, personalization enables the user to customize the artifact to meet individual needs and preferences. Users begin to increase their comfort zone with the technology artifact, and in doing so become more capable of, and amenable to, taking risks in personalizing it.
As one software engineer remarked, “during the initial usage of the IDE, I was scared to mess around…I did not know what would happen if I changed an option…would it be that I would screw things up…this may call for me to re-load everything….“. Once users attain a comfort zone, they are willing to personalize the technology artifact so that they do not have to keep re-doing mundane tasks, such as changing the directory name from the ‘default’ one to one needed by their task.

**Stage 3:** As the technology diffuses through the organization and its usage by organizational members increases, standards will emerge in order for organized work to take place in an efficient and effective manner. At this time users will be forced to customize the technology to meet these requirements – the “adaptability” stage. Standards emerge, or are enforced, for the simple reason of conducting group work in an effective and efficient format.

**Stage 4:** Users continue to innovate with the technology after adapting to organizational standards; these innovations lead to the development of novel functionalities – the “exaptability” stage. At the exaptability stage users are looking at ways to push the technology artifact further. This requires users to recognize the weaknesses, limitations, and shortcomings of the artifact and to build suitable solutions. Not all users will have the capacity to innovate, nor the resources required to do so. As noted by one engineer, “Jason [another Software Engineer] is ahead of most on the learning curve…sometimes he comes up with new ways of doing something that the rest of us marvel at… I am not so wise…Plus, I do not have the same time constraints as Jason…he is on projects that have more slack….mine [projects] are factory-minded, in and out, and with the least time and effort…”.

**Stage 5:** As a user continues to innovate with the technology, he/she will ascend to the status of an expert or super-user.
The user becomes knowledgeable about the intricacies of the technology artifact and can make changes to it under pressures of time and resource constraints. The user will be able to work with the technology artifact in an agile manner. The agile stage is characterized by high proficiency in the use of the artifact. At the agile stage, a user is not just using the technology but is exploiting it to the maximum, and figuring out how to enhance it by adding to or changing the artifact. In von Hippel’s conceptualization, we can consider users of technology at this level as “lead users” (von Hippel, 1996). The above model is supported by the literature. Most novice users, when first introduced to a technology, are overwhelmed by the complexity of the artifact. Due to this, users opt to satisfice (Simon, 1947). The primary concern of the user is to get the technology artifact into an operable condition – the operability stage. As such, their first response is to accumulate basic knowledge on how to operate the artifact without getting bogged down by details. In order to do this, they are likely to engage in acts of exploitation (Cyert and March, 1963), exploiting the past knowledge they possess. This past knowledge could be experience with a similar technology (such as another programming interface), a technology used in the context of a similar task (such as the use of MATLAB for the calibration of financial and statistical operations), or even their own past knowledge about technology in general (such as the use of a drop-down list). These recollections are based on previous events; as such they are acts of learning-before-doing. An individual must also engage in learning-by-doing. Only through the process of experimenting with the technology, i.e. putting past knowledge to work in the context of the new technology, will a user be able to comprehend whether the past knowledge is of any use or not.
Experimentation is a fundamental activity in the calibration of innovations and is also a vital aspect of any learning process (Adler and Clark, 1991; Thomke, 1998; Smith and Eppinger, 1997). As noted by Von Hippel and Tyre (1994, p. 25), “The need for learning-by-doing indicates that the innovation process will often be iterative and that developers typically can’t “get it right the first time””. It is through continued exposure to and experimentation with the technology that the user will increase his/her stock of knowledge regarding the artifact (Huber, 1991; Senge, 1990; Von Hippel and Tyre, 1994). Prior research on innovation is supportive of our conceptualization of the flexibility stage. Research has shown that users have a greater propensity to conduct innovations in the development of new products and services to meet their local needs, rather than engage in acts of innovation which appeal to a broader audience, due to the costs involved in securing their innovations (Harhoff et al., 2002). Moreover, users are more likely to use their existing knowledge to develop the innovation rather than search for outside knowledge, due to the cost involved in conducting the search (Luthje et al., 2002). Local information is that which an innovator already has on-site prior to innovating. Local information, in our context, is the peculiarities and idiosyncrasies of the user and how they want to engage the technology. As pointed out by Harhoff et al. (2002), one of the benefits of users first focusing on their local needs is the “low-cost innovation zone”: users need not concern themselves with the needs of the population at-large. Focusing on the needs of the population at-large is risky (as one may not be able to develop innovations beneficial to the rest) and is also costly in terms of effort. As noted by Davenport et al.
(2003), in their study of intellectual asset re-use, on average it takes three times more effort to develop a knowledge nugget for use by the organization at-large than to create one for personal use. We must note that not all users will engage in acts of personalization to the same degree or frequency. Novice users of technology are, on average, risk averse compared to experts (Kahneman and Tversky, 1979). Acts of personalization call for changes to the original parameters of the technology, and as such they contain an element of risk. Experts, who possess domain knowledge, will be better able to judge the degree of risk and either conduct or refrain from personalizing the software. Novices may not be able to make this judgment and hence may avoid personalizing the artifact, at least during the initial periods of technology use. Adaptation will occur as a means to synchronize individual efforts for the achievement of organizational goals. Without adaptation, users will engage in conflicts when the technology is used, as there will be a lack of conformity. The recent work on user toolkits and customer innovations supports the fact that users have the capability to radically modify products and designs to meet their needs – the exaptability stage. The rich literature on decision making (Simon, 1947) has attested to the fact that experts have the ability to deduce patterns with ease, use their rich source of experiences to solve novel problems, and even re-design artifacts. Users who possess deep knowledge about technology artifacts are lead users (von Hippel, 1996). They have foresight and use the technology in an optimal and complete manner; as such they are apt to discover the limitations of the artifact. This forces them to innovate to meet their needs.
For example, a novice using Microsoft Excel may not know the limitations of the powerful spreadsheet tool; however, most expert users write their own macros and routines to improve the functionality of the tool. The agility stage is the end stage of the life-cycle model. Here is where one is deemed an expert in the use of the technology. **Alternative Connections between Stages** While we have discussed a straightforward and linear progression between the various stages, there can be variations. Due to space limitations we cannot cover these in any depth, but we would like to allude to them and point out that future research is needed to investigate them in more detail. There can be instances where Stage 1 is followed by Stage 3, with Stage 2 being skipped. This dynamic is possible under several scenarios. First, when the individual user is getting started with a fairly mature technology that has a rich history in the organization. The individual user may need the time and space to get operational with the technology. However, soon after this they will be introduced to existing organizational standards and will be asked to customize their artifact to meet these requirements so that they can begin to conduct work in the collective setting. Only after this will the user be able to increase his/her exposure to the artifact and begin to reach a comfort zone for personalizing the technology. Second, if there is an organization-wide initiative to introduce a software application, chances are high that there will be a dominant group overseeing the effort. This group may calibrate standards and rules to govern the usage of the application so that coordination and compatibility issues are addressed. Individual users, under this scenario, will also be required to customize their individual artifacts to meet these standards and then personalize the rest of the artifact to their peculiarities.
Another common feedback loop is where Stage 4 (Exaptability) is followed by Stage 3 (Adaptability). We posit that this could be common in cases where we have a highly innovative group of technology users. The innovative class of users will constantly seek ways to push the boundaries of the technology. In doing so, they will spread such knowledge to other users, through both formal and informal mechanisms. Formal mechanisms include the introduction of procedures and practices in work projects and assignments. Informal mechanisms include discussions with peers and personal interactions. Once these innovations gain traction and become widely acceptable, they will call for a re-definition of standards, and customizations to the revised standards will follow. **Linking the Individual and Organization Dynamics** The process model displayed in figure 1 is notable in that it appreciates the roles played by the individual technology user, his/her group, and the organization at-large. As noted in the diagram, the sophistication of user modifications to technology increases as individual and organizational experience with the technology deepens. This by itself is not likely to be a novel finding; however, the manner in which increasing experience of the three entities (individual, group, and organization) affects the kinds of modifications is interesting. When a technology is first introduced to the users of an organization, or a user decides to experiment with a new technology, users are often left on their own to figure out how to get it operational to meet immediate needs. Once it is operational, we see the emergence of flexibility acts to make the technology more suited to the user. The stages of operability and flexibility are largely dominated by individual user decisions and preferences. The role played by the user’s local group or the organization at-large is minimal.
The reason for this is that, just like the user, the rest of the organization is still grappling with how to use the technology. As such, there is not much knowledge in-house to help individual users. Over time and with experience, individual users become sophisticated and comfortable with the technology, and use of the technology increases in the organization. Soon, conflicts will arise, because there will be a lack of compatibility and synchronization in how the technology is used. Economics dictates that it is in the best interest of everyone to develop standards. The development of standards can be top-down or emergent. In highly distributed organizations, we postulate, standards are likely to emerge from the bottom up; this is because of the lack of a dominating authority and the differences in technology usage across the various centers. As pointed out by Desouza and Evaristo (2003), organizations can choose to let knowledge management standards emerge from the local offices to the regional centers, with the headquarters playing the role of the integrator and managing the various standards. In organizations that are centralized in nature, it is reasonable to expect that standards will be pushed down from the top. A good example here is standard development at defense installations such as the Army or the Navy. Standards are calibrated by a dominant group in the organization; for example, a division in charge of communication may develop communication protocols. These standards are then enforced throughout the organization. Regardless of whether standards are developed top-down or bottom-up, there must be enough critical mass in terms of active users to justify the investment in standards. The development of standards involves the individual user, his/her local group, and the organization at-large. Adaptability will need to occur to meet the standards. Rationally speaking, standards are not updated in real-time or on a regular basis.
Standards are slow to change, as changing them is a costly and resource-intensive effort. As such, users seldom stop at the adaptability stage. Users will continue to use the technology, and continued use will lead them to discover shortcomings of the artifact. They will then engage in acts of exaptation to meet their needs. The exaptation level is where the difference between experts and regular users starts to become clear. Not all users will engage in acts of exaptation. At the exaptability stage, it is critical that an organization have mechanisms to connect regular users, their groups, and the organization at-large with the experts who modify the technology. Unless this occurs, the organization’s experience with the technology may not grow effectively: the experts will increase their personal stock of experience and may use the technology in more effective ways, while the rest of the organization struggles with shortcomings and attempts to work on problems for which a solution already exists (with the experts). If the organization is able to tap into the exaptations conducted by the experts, these can be evaluated by communities such as the local group the expert belongs to or by members of the extant organization. If the modifications are found to be suitable, they can be diffused throughout the organization and the standards in place can be updated; this is the best scenario for both the individual user and the organization. Given enough time, usage, and exposure to the technology, the individual and/or organization are bound to reach a stage of agility. Organizations that are successful in knowledge sharing and innovation diffusion will become agile due to innovations by individual users and their associated adoption, assimilation, and diffusion in the organization. Less successful organizations may find differences in the knowledge possessed by their users about the technology artifact.
There will be “experts” who can work with the technology in an agile manner and the “rest” who have limited knowledge about the technology and its capabilities. This situation will not be ideal for the organization, as conflicts in the use of the technology are bound to occur between the two sets of users. DELTA, the research site, designated one day as “Show-Me-Day”. Show-Me-Day was a half-day event that took place once every six weeks and consisted of presentations made by software engineers to their peers. Engineers who had customized, modified, or invented add-ons to the IDE were asked to make brief presentations to showcase their work. These presentations worked as a means to infuse new knowledge into the software engineering community and helped stimulate further discussions, critiques, and collaborations on modifying the IDE for the effective conduct of work. This is an ideal way we see both the organization and individuals interacting for innovations with technology artifacts. To summarize, users do not modify technology artifacts in isolation from the rest of the organization. As discussed above, there is a rich inter-play of dynamics between individuals, groups, and the organization in how technology modification is conducted. 5. ONGOING WORK, PRACTITIONER AND RESEARCH IMPLICATIONS Before discussing the ongoing work and conclusions, we must acknowledge the limitations of the work. First, the above model is one possible explanation of the stages which users follow in modifying artifacts. It is by no means the only one; variations can and will exist. As we have briefly alluded to, for some organizations adaptation may be the first phase. Users will be asked to adapt to the existing standards from the beginning. This is plausible in organizations that have a mature history of information systems development and rigid standards in place; users who join the organization will be asked to conform to these standards.
Depending on the rigidity of the technology, acts of modification for exaptability may not be possible. For instance, if the technology is rigid and organizational constraints and regulations prevent user manipulation of its architecture, acts of exaptability will not occur. As we continue our research, we hope to uncover the situations under which the process model presented here works, and also the situations where we may have variants. Second, we have not discussed the concept of repeated feedback loops here. Modification of technology is not a straightforward linear process. Rather, it is one of cycling between the stages through repeated feedback. Due to space limitations we have not discussed these findings here. However, we would like to acknowledge that they exist and are important. For example, an expert may come up with an exapted way to use the technology, and this insight may be diffused throughout the organization. The user community, with the exception of the expert, may have to start working with the modification at the operability stage. Working at the operability stage, and then moving through acts of flexibility, will help the users gain a better appreciation for the modified technology artifact, and they may even be able to improve the modification further. Third, we have been limited to presenting our findings from an examination of practices in one organization. We understand and acknowledge the issues associated with generalizing our findings. We are also limited in the generalization of our findings about user modifications of technology artifacts outside the software engineering domain. Fourth, our findings here must be viewed in light of our research sample – novice software engineers. While we do feel that novice engineers share many characteristics with the general end-user population, and hence our findings may apply to general end-users of technology artifacts, we must test our findings on new samples to gain more support.
A possible strategy for future research might be to see how information systems students interact with programming environments such as an IDE during an introductory programming course. Findings from such an investigation may refute or support our tentative claims. We view the work presented here as ongoing and not completed. To the best of our knowledge, this is one of the only studies in the IS literature, with the exception of the work on adaptive structuration, that focuses on understanding the process by which technology is modified by users. It is our goal to complete the validation of our findings by the time of the conference. We plan to elicit data from two more software organizations, one based in Europe and one in Asia. Gathering data from these sites will allow us to see if there are distinctions in how technology is modified between geographic locations. Cusumano (2004), in his study of the Business of Software, noted that software is managed differently in the United States, Europe, and Japan; we will investigate whether differences exist in terms of the propensity to modify technology artifacts. The present study has implications for practitioners. In other fields of product development, users are taking more responsibility for product design efforts. We believe that paying attention to how users modify technology artifacts is a viable first step toward pushing design issues outside the IS organization and to the customer. Customers, after all, have a better understanding of their requirements than an outsider such as the IS function. We must hence resist the temptation to guess what the user wants, and give them the opportunity to construct their own innovations and products. In order to do this, we must change the way we design software and information systems. Designers must not focus on building an all-encompassing system with all the bells and whistles; rather, they must exert effort in building a stable environment and workspace with tools.
Users can then use the tools to modify the technology as needed to meet their needs. Practitioners, especially those in the software development business, must take a more active role in involving lead users in the design and development cycles. These users represent a viable source of foresight and know-how that is waiting to be tapped. A handful of organizations have begun to host “user conferences”: forums that bring together lead users of their technologies in the hope of stirring discussions and generating innovative ideas. The work here has significant research implications too. First, the study can be used as a building block toward understanding how users innovate with technology. Currently, not much attention has been focused on this issue. Technology is becoming ubiquitous and pervasive (Lyytinen and Yoo, 2002). While we may not completely understand why users adopt technologies, the current IS literature has more than adequately researched this question; we must now focus on the more important question – “what do individuals do with the technology after adoption?”. This study has provided tentative answers to this question. We encourage researchers to examine the work in product development, design studies, and engineering management as guidance on how we might better understand user innovation with technology. This paper also alluded to the concept of “experimentation”. The concept of “experimentation” has a rich history in the fields of problem solving and learning; using this work as a foundation can allow us to appreciate the iterative nature by which users interact with technology. Users go through a process of trial and error until a successful solution is found. In the case of user innovation with technology, the concept of “experimentation efficiency” as defined by Thomke (1998), “the economic value of information learned during an experimental cycle, divided by the cost of conducting the cycle”, is salient.
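Thomke’s quoted definition can be written compactly; the symbols below are our own shorthand, not Thomke’s notation:

```latex
% Experimentation efficiency (our shorthand for Thomke's 1998 definition):
% E -- experimentation efficiency, V -- economic value of information
% learned during one experimental cycle, C -- cost of conducting the cycle.
E = \frac{V}{C}
```

A cycle that yields little information (small V) or is expensive to run (large C) thus has low efficiency.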
When an experiment is costly and the incremental value of the information learned is small, experimentation efficiency is low. Experimentation efficiency is not a static value; it is dynamic and will change during the process of experimentation. As users become more sophisticated in their usage of technology, we can expect their experimentation efficiency to increase; this has implications for how they might move through the process model discussed in this paper. Lastly, the study of user modifications of technology has bearings on research on IS standards. Today, we have a proliferation of collective systems for the design and development of software, such as the many instances of open source development. Here, standards are set as to what is acceptable behavior by the user group. These standards emerge from the bottom up rather than being imposed top-down by a governing body. One argument for why these standards emerge from the bottom up is that these communities are composed of skilled knowledge workers. These knowledge workers like to be in control of their knowledge and, more importantly, like to control how it is used to better the technology. Unless we understand how technology is personalized, customized, and exapted by users, we will not be able to truly appreciate the emergence of standards. In summary, we have described the various types of modifications conducted by users on technology. In addition, we have proposed a process model to link these modifications. We have also provided a rich array of practitioner and research implications. It is our hope that the paper has opened up an avenue for interesting discussions within the IS community. 6. REFERENCES
Can We Run in Parallel? Automating Loop Parallelization for TornadoVM Rishi Sharma∗ IIT Mandi India rishi-sharma@outlook.com Shreyansh Kulshreshtha† IIT Mandi India shreyanshkuls@outlook.com Manas Thakur IIT Mandi India manas@iitmandi.ac.in Abstract With the advent of multi-core systems, GPUs and FPGAs, loop parallelization has become a promising way to speed up program execution. In order to keep pace, various performance-oriented programming languages provide a multitude of constructs to allow programmers to write parallelizable loops. Correspondingly, researchers have developed techniques to automatically parallelize loops that do not carry dependences across iterations, and/or call pure functions. However, in managed languages with platform-independent runtimes such as Java, it is practically infeasible to perform complex dependence analysis during JIT compilation. In this paper, we propose AutoTornado, a first-of-its-kind static+JIT loop parallelizer for Java programs that parallelizes loops for heterogeneous architectures using TornadoVM (a Graal-based VM that supports insertion of @Parallel constructs for loop parallelization). AutoTornado performs sophisticated dependence and purity analysis of Java programs statically, in the Soot framework, to generate constraints encoding conditions under which a given loop can be parallelized. The generated constraints are then fed to the Z3 theorem prover (which we have integrated with Soot) to annotate canonical for loops that can be parallelized using the @Parallel construct. We have also added runtime support in TornadoVM to use static analysis results for loop parallelization. Our evaluation over several standard parallelization kernels shows that AutoTornado correctly parallelizes 61.3% of manually parallelizable loops, with an efficient static analysis and a near-zero runtime overhead.
To the best of our knowledge, AutoTornado is not only the first tool that performs program-analysis based parallelization for a real-world JVM, but also the first to integrate Z3 with Soot for loop parallelization. 1 Introduction With the onset of multicore and heterogeneous systems over the last two decades, several programming languages were enriched with various ways to write concurrent programs that could reap the benefits of the available hardware. A few languages provide inbuilt facilities to fork and launch multiple threads, whereas others support writing concurrent programs with extensions or libraries. At the program level, writing complex computations invariably involves iterating over large data sets in loops. However, languages such as Java, though they provide an in-built ability to write multithreaded programs, do not allow the programmer to directly mark loops for parallelization. One of the useful developments in this space has been the design of TornadoVM [9]. TornadoVM is a Java virtual machine (JVM) that extends OpenJDK and GraalVM [11] with a facility to parallelize for loops across heterogeneous architectures. TornadoVM not only allows programmers to annotate loops with @Parallel constructs and provides parallelization back-ends for different architectural components, but also enriches Java with a facility to express parallelization opportunities at loop level. However, identifying which loops can be parallelized is a non-trivial problem and involves sophisticated program analyses for even trivial programs (those involving accesses to arrays with indices being affine functions of loop variables). As an example, consider the Java code snippet shown in Figure 1.
Figure 1. A Java code snippet to demonstrate the analyses required for loop parallelization.
In order to determine if the for loop in function array_sq can be parallelized without changing the semantics of the program, we need to (i) extract the array indices at statements 4 and 5; (ii) find out if the indices may access the same location in a conflicting manner across iterations; and (iii) check if the call to the function elem_sq may cause any side-effect. This translates to performing dependence analysis to find constraints under which a loop can be parallelized, solving the identified constraints (possibly using a constraint solver), pointer analysis to identify aliases and to resolve method calls, and an interprocedural purity analysis in the VM. However, performing multiple such precise interprocedural analyses in a JVM, where the program-analysis time directly affects the execution time of a program, is prohibitively expensive. In this paper, we present a tool named AutoTornado that addresses all the challenges listed above: it performs precise analysis of Java bytecode and marks loops for parallelization in TornadoVM, with negligible overhead during program execution. AutoTornado first performs dependence analysis over Java programs, in the Soot framework [24]. The dependence analysis generates constraints (over iteration variables) that encode conditions under which a given candidate loop can be parallelized. AutoTornado then feeds these constraints to the Z3 [8] theorem prover (which we have integrated with Soot), to determine if the constraints are satisfiable. In case the loop additionally makes some function calls, AutoTornado also analyzes the called function(s) for purity. Finally, if there are no dependences found by Z3, and if the loop does not call any impure functions, then the given loop can be marked for parallelization.
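Since the Figure 1 snippet itself is not reproduced in this extract, the following is a hypothetical reconstruction of the array_sq/elem_sq pattern the text describes (the method names come from the prose; the body of elem_sq is our assumption). Each iteration reads and writes only a[i], and elem_sq has no side effects, which is precisely why such a loop is a safe parallelization candidate:

```java
import java.util.Arrays;

public class ArraySq {
    // Pure helper: the result depends only on the argument, with no
    // side effects, so calls to it cannot create cross-iteration conflicts.
    static int elemSq(int x) {
        return x * x;
    }

    // Candidate loop: iteration i touches only a[i], so no two iterations
    // access the same element. In TornadoVM the loop header could carry an
    // @Parallel annotation; it is shown sequentially here so the sketch
    // runs on a plain JVM.
    static void arraySq(int[] a) {
        for (int i = 0; i < a.length; i++) {
            a[i] = elemSq(a[i]);
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        arraySq(a);
        System.out.println(Arrays.toString(a)); // prints [1, 4, 9, 16]
    }
}
```

Because every iteration is independent, any execution order (or fully parallel execution) yields the same final array.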
Inspired by the PYE framework [23] (which proposes ways to interface analyses across static and JIT compilers), AutoTornado stores and conveys information about such parallelizable loops to our modified TornadoVM, which simply uses AutoTornado's results to parallelize the identified loops during program execution. The key contribution of our approach is to enable complex dependence- and purity-analysis based automatic loop parallelization for a VM, without spending much time in the VM. To evaluate the efficacy of this approach, we compared the precision of AutoTornado-identified parallelization opportunities with manually marked loops, for 14 benchmarks from a Java version of the PolyBench suite [18] (part of the TornadoVM repository). We found that AutoTornado successfully marks 61.3% of loops for parallelization, and even identifies a few loops that were not identified manually. We also measured the speed-ups achieved due to AutoTornado-identified parallel loops, and found them to be significant (3.99x, on average, across all the benchmarks under consideration). In order to evaluate the trade-off between storing additional static-analysis results and performing expensive analyses in the VM, we computed the space overhead of our result files and the analysis time spent in Soot+Z3. The results show that AutoTornado enables loop parallelization in TornadoVM with a small storage overhead (10.2% of the class-file size, which can be further reduced by adding code annotations to the class files themselves), negligible run-time overhead (2.85% of the execution time of the benchmarks), and that performing those analyses in the VM would have been prohibitively expensive (57.58% of the total execution time). To the best of our knowledge, AutoTornado is not only the first tool that performs program-analysis based parallelization for a real-world JVM, but is also the first to integrate Soot with Z3 for loop parallelization.
The purity analysis on its own is a Soot enhancement for recent Java versions and is usable as an add-on by other analyses and frameworks. Additionally, the idea of carrying static-analysis results to a Java runtime is in itself novel and presents interesting engineering challenges; this manuscript traverses our exploration of the design space and describes our solutions throughout the presentation. We thus believe that in the vast research span that focuses on parallelization, our work is a significant step in solving the associated challenges for languages with managed runtimes such as Java. The rest of the paper is organized as follows. Section 2 introduces the tools and analyses that we use in our work. Section 3 gives an overview of the architecture of our proposed approach (AutoTornado) and elaborates on each of the analysis components. Section 4 describes the changes that we make in TornadoVM to support reading static-analysis results, and Section 5 presents a few design choices that we made. Section 6 evaluates our tool with respect to the precision of AutoTornado-marked annotations, the overhead in terms of storage and runtime, and the speed-ups achieved with AutoTornado-parallelized loops. Finally, Section 7 discusses a few related works and Section 8 concludes the paper.

2 Background

This section introduces a few preliminary concepts, tools, and analyses that are used throughout the paper.

1. Loop-level dependences and parallelization. Figure 2 shows code in which each iteration swaps adjacent array elements using a local temporary variable temp. A read-after-write or write-after-read dependence occurs when an iteration of the loop reads from a memory location and another iteration writes to the same memory location. A write-after-write dependence occurs when two separate loop iterations write to the same memory location.
The write to \( ar[i] \) from iteration \( i \) and the read of \( ar[i-1] \) from iteration \( i+1 \) on line 5 imply a read-after-write dependence. Similarly, the writes to \( ar[i] \) on line 5 from iteration \( i \) and to \( ar[i-1] \) on line 6 from iteration \( i+1 \) imply a write-after-write dependence. As the memory location is mutated in both of the cases, these dependences can lead to incorrect results if the iterations are run in parallel.

```
1 public void swap(int[] ar) {
2   int n = ar.length;
3   for (int i = 1; i < n; i++) {
4     int temp = ar[i];
5     ar[i] = ar[i-1];
6     ar[i-1] = temp; } }
```

Figure 2. Code to swap adjacent array elements.

There is another, benign dependence that we can ignore: a read-after-read dependence, where two different iterations read from the same memory location. As no memory location is mutated, parallelization poses no threat. Apart from not having non-benign dependences, in order to be parallelizable, a loop must not perform any operation (either directly or through a function call) that may lead to side-effects.

2. Soot [24] is a popular framework for analyzing Java bytecode. The framework is written in Java and provides different intermediate representations (IRs) for representing the Java code. We use the Jimple IR for analysis, which is a typed three-address representation. We also use Soot's points-to analysis Spark [15] for constructing call-graphs and for deriving alias relationships among references.

3. TornadoVM [9] is a plug-in to OpenJDK and GraalVM (standard Java runtime environments) that allows programmers to automatically run Java programs on heterogeneous hardware. It works by reconfiguring applications, at runtime, for hardware acceleration based on the currently available hardware resources. TornadoVM currently targets OpenCL-compatible devices and executes code on multi-core CPUs, dedicated and integrated GPUs, and FPGAs.
TornadoVM can parallelize canonical for loops whose index variable and increment operation follow certain conditions; the loops to be parallelized in turn need to be annotated with @Parallel.

4. Z3 [8] is an SMT solver and theorem prover developed by Microsoft Research. SMT (Satisfiability Modulo Theories) problems are decision problems for logical formulae with respect to combinations of theories such as arithmetic, bit-vectors, arrays, and uninterpreted functions. Z3, being a very efficient SMT solver, is widely used for solving problems that arise in software verification and analysis.

3 AutoTornado

In this section, we first illustrate the architecture of AutoTornado, and then describe each of its modules. While describing each module, we first motivate the need for the module, and then present our proposed solution. Figure 3 shows how AutoTornado identifies parallelizable loops and conveys this information to TornadoVM. Given Java bytecode, AutoTornado first identifies canonical for loops (candidates for parallelization) and then performs dependence analysis over the loop bodies, along with purity analysis over called functions. The dependence constraints are fed to the Z3 theorem prover, which helps classify which loops can be parallelized without changing the underlying semantics. Finally, AutoTornado generates annotations that can be supplied to the TornadoVM runtime, and adds support to the VM (see Section 4) to parallelize the loops identified by our static analysis during execution.

### 3.1 Identifying Canonical Loops

Java code is first compiled to Java bytecode, which then runs on the JVM. Soot accepts this bytecode as input and converts it into Jimple. However, Java bytecode and Jimple do not have syntactic constructs like for loops. Figure 4 shows a simple for loop that doubles array elements, in its Jimple representation.
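Figure 4 itself is not reproduced here; based on the description, its Jimple would look roughly as follows (reconstructed by us from Jimple's general syntax, so the class name and temporaries are illustrative):

```
public void doubleAll(int[])
{
    Example this;
    int[] a;
    int i, n, $i0;

    this := @this: Example;
    a := @parameter0: int[];
    n = lengthof a;
    i = 0;

 label1:
    if i >= n goto label2;
    $i0 = a[i];
    $i0 = 2 * $i0;
    a[i] = $i0;
    i = i + 1;
    goto label1;

 label2:
    return;
}
```

Note that the for construct has been lowered to labels and gotos, which is why the analysis has to rediscover the loop's head, condition, and update statement.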
As can be observed, identifying the loop, iteration variable, lower bound, upper bound, and increment poses a challenge in the Jimple representation. We restrict the domain of loops that we handle to those identified as canonical according to Algorithm 1. The loops identified have the following properties: (a) constant lower bound; (b) private iteration variable; (c) linear, positive, and constant increment to the iteration variable; (d) condition of the form \( i \{ < | \leq | > | \geq \} u \); (e) iteration variable modified only in the update statement; and (f) single exit.

Algorithm 1 Algorithm to identify canonical loops.

```plaintext
 1: procedure isCanonical(l)
 2:   head ← l.getHead()
 3:   cond ← l.getCondition()
 4:   back ← l.backJumpStatement()
 5:   upd ← back.predecessor()
 6:   init ← head.predecessor()
 7:   lb ← init.rhs()
 8:   if back ≠ l.statements.last() then          ▷ Backjump should be last
 9:     Return false
10:   else if upd.type() ≠ Assignment then
11:     Return false
12:   iter ← upd.def()                            ▷ The variable updated
13:   if iter.type() ≠ Local then
14:     Return false
15:   if init.type() ≠ Assignment OR init.def ≠ iter then
16:     Return false
17:   if !cond.values.contains(iter) then         ▷ Unsupported condition
18:     Return false
19:   compOp ← cond.function()                    ▷ Compare operator
20:   if !compOp.oneOf({<, ≤, >, ≥}) then
21:     Return false                              ▷ Unsupported condition
22:   ub ← cond.use.extractUpperBound(iter)
23:   if upd.rhs.type() ≠ AddExpr then            ▷ Linear, positive inc
24:     Return false
25:   inc ← upd.rhs.extractIncrement(iter)
26:   if !IntConst.All(lb, inc) then              ▷ Required constants
27:     Return false
28:   if Assigned(iter, l.statements \ upd) then  ▷ Iteration variable modified elsewhere
29:     Return false
30:   if l.hasBreak() then                        ▷ break statement in loop
31:     Return false
32:   Return iter, lb, ub, inc, init, upd
```

Soot provides a toolkit
that returns a list of the loops inside a given function. We pass each such loop \( l \) to the procedure isCanonical. If isCanonical\( (l) \) returns false, the loop \( l \) is ignored and not parallelized. If the loop is indeed canonical, we extract the iteration variable (iter), lower bound (lb), upper bound (ub), increment (inc), the init statement (init) and the update statement (upd), and store them for later use.

### 3.2 Variable scoping

Local variables and non-local variables need to be handled separately for dependence analysis. In a parallel runtime, variables in the loop that are locally scoped are private and allocated on the stack of the running thread. On the other hand, non-local variables are shared across threads. Therefore, we must be able to differentiate local variables from their non-local counterparts. Unfortunately, Java bytecode does not contain information about variable scopes. Scoping information can be derived indirectly from the local variable table in the bytecode, if the source code is compiled with the -g flag. For each variable in the function, the local-variable-table entry in the class file contains the bytecode index of its initialization, its size, and the length in bytes for which it stays live. We directly use ASM (the front-end used by Soot) to read the class file and return the local-variable-table entry corresponding to each variable. The procedures in Algorithm 2 check whether a given variable var is local. A variable is local to the loop if and only if its bytecode index and length are within the bounds of the bytecode indices of init and upd. Jimple's temporary variables, prefixed with a $, are always considered local to the loop. For each canonical loop, we generate the set of local variables using the procedure getLocalVars in Algorithm 2.
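The two notions above, canonicity and loop-local scoping, can be illustrated with small Java examples (ours, not from the paper):

```java
class LoopExamples {
    // Canonical: constant lower bound, private iteration variable, condition
    // of the form i < u, constant positive increment, iteration variable
    // modified only in the update statement, and a single exit. The variable
    // temp is scoped inside the loop body, so it is private to each iteration.
    static void doubleAll(int[] a) {
        for (int i = 0; i < a.length; i++) {
            int temp = 2 * a[i];
            a[i] = temp;
        }
    }

    // Not canonical: the increment is not a compile-time constant, and the
    // break statement gives the loop a second exit.
    static int sumUntilNegative(int[] a, int step) {
        int sum = 0;
        for (int i = 0; i < a.length; i += step) {
            if (a[i] < 0) break;
            sum += a[i];
        }
        return sum;
    }
}
```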
### 3.3 Dependence Analysis on Scalars and Fields

Being able to discriminate between local and non-local variables in the loop, along with the fact that local variables can be accommodated on the thread's local stack upon parallelization, helps parallelize more loops compared to the conservative case. Figure 5 shows a loop writing to a local as well as a non-local variable. As the variable c at...

### 3.4 Dependence Analysis on Arrays

To make the array references easier to work with, we separate them into three sets, namely arReads (the set of array-read references), arWrites (the set of array-write references) and arRefs (arWrites ∪ arReads). Dependence analysis for arrays is then done in the following three phases: (i) alias analysis, (ii) constraint generation, and (iii) running Z3. Only those references that point to the same array object can have a mutual dependence. Therefore, alias analysis is crucial for improving the precision of the dependence analysis. We generate the constraints for pairs of elements from arWrites and arRefs if and only if their array objects alias. Here, we identify the aliases using the GeomPTA and Spark pointer analyses provided by the Soot framework. To identify whether there is a dependence between a given pair of aliasing array references, the minimum constraint is that the array-reference indices should not be equal over two different iterations. However, this alone would be meaningless without capturing the program states in logic; each variable may be assigned results from complex computations. Considering all these factors, we now describe how we generate the set of constraints; see Algorithm 3. Note that we limit the types of values on the right-hand side of the assignment statements that occur in the def-use chain of the indices to IntConstant, Local, JAddExpr, JMulExpr, and JSubExpr, denoting common integer-arithmetic expressions for forming indices in parallel programs. From now on, we use \( op \in \{+, -, \ast\} \) to represent a supported operator.
**Algorithm 3** Algorithm to generate constraints.

```
 1: procedure N(y, i)
 2:   if y ∈ localVars then Return yⁱ              ▷ y in iteration i
 3:   Return y
 4: procedure stmtC(s, i, iterVs, lbs, ubs, l)
 5:   if s is IdentityStmt then                    ▷ The value comes from a parameter
 6:     Return true
 7:   else if s: y ← (...) AND y ∈ iterVs then     ▷ Other loop's iteration variable
 8:     lbCur ← N(y, i) ≥ lbs[y]
 9:     ubCur ← N(y, i) < ubs[y]
10:     cU ← ⋁_{d ∈ Defs(ubs[y], l.head)} stmtC(d, i, iterVs, lbs, ubs, l)
11:     Return lbCur AND ubCur AND cU
12:   if s: y ← k AND k is IntConst then
13:     Return N(y, i) == k
14:   else if s: y ← x1 op x2 AND x1, x2 are scalars then
15:     cX1 ← ⋁_{d ∈ Defs(x1, s)} stmtC(d, i, iterVs, lbs, ubs, l)
16:     cX2 ← ⋁_{d ∈ Defs(x2, s)} stmtC(d, i, iterVs, lbs, ubs, l)
17:     Return N(y, i) == (N(x1, i) op N(x2, i)) AND cX1 AND cX2
18: procedure depC(w, r, l, iterVs, lbs, ubs)
19:   c1 ← ⋁_{d ∈ Defs(w.index, w.stmt)} stmtC(d, 0, iterVs, lbs, ubs, l)
20:   c2 ← ⋁_{d ∈ Defs(r.index, r.stmt)} stmtC(d, 1, iterVs, lbs, ubs, l)
21:   lc ← N(l.iter, 0) ≠ N(l.iter, 1)             ▷ Different iterations of loop l
22:   cdep ← N(w.index, 0) == N(r.index, 1)        ▷ The dependence
23:   Return c1 AND c2 AND lc AND cdep
24: procedure loopC(arWrites, arRefs, l, iterVs, lbs, ubs)
25:   Return ⋁_{w ∈ arWrites} ⋁_{r ∈ arRefs} depC(w, r, l, iterVs, lbs, ubs)
```

The procedure loopC in Algorithm 3 generates the set of array dependence constraints for each pair of elements from \(arWrites\) and \(arRefs\) in a given loop \(l\), given the sets of iteration variables, lower bounds, and upper bounds of all the loops of the function (\(iterVs\), \(lbs\), and \(ubs\), respectively). The constraints are encoded such that if the set of constraints is satisfiable, there is a dependence.
The procedure N maps program variables to logical variables. If a variable \(y\) is non-local to the loop, the mapping is the identity. If the variable is local to the loop, \(y^i\) denotes the logical variable for the program variable \(y\) in the \(i\)-th iteration. The procedure stmtC recursively generates the constraints for the program logic, starting at a given statement. If the value of the variable comes from a parameter, we stop and return the identity constraint \(true\). If the variable is the iteration variable of another (nested) loop, we constrain it to lie between its lower and upper bounds. Lines 12-13 state that if the value assigned is an integer constant, the variable should be constrained to that integer value. Finally, lines 14-17 recursively generate the constraints for the operands on the right-hand side. The procedure depC generates the constraints for a given pair of elements from the sets \(arWrites\) and \(arRefs\). Lines 21-22 enforce that the iterations be separate and that the indices of the references be equal for a dependence. This procedure is called from loopC, which generates the entire constraint set by taking the disjunction of the dependence constraints for each pair of \(w \in arWrites\) and \(r \in arRefs\). In the absence of the value of a variable during static analysis, the constraints would be weaker, and hence easier to satisfy. Therefore, AutoTornado takes a conservative approach in the absence of surety, maintaining soundness. Figure 6 shows a simple loop given to the program for dependence analysis. \(k1, k2\) and \(k3\) are locally scoped, and \(f1, f2\) and \(f3\) are formed from the supported operators mentioned above.
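To make the constraint solving concrete, suppose (our instantiation, purely for exposition) that f1(i) = i + 1, f2(i, k) = i + k, and f3(i, k) = i − k. The dependence query that would be shipped to Z3 can then be written as the following SMT-LIB fragment:

```smt2
; Variables for two distinct iterations u and v of the loop in Figure 6
(declare-const iu Int) (declare-const k1u Int) (declare-const k2u Int) (declare-const k3u Int)
(declare-const iv Int) (declare-const k1v Int) (declare-const k2v Int) (declare-const k3v Int)
; Program logic for iteration u: k1 = f1(i), k2 = f2(i, k1), k3 = f3(i, k2)
(assert (and (= k1u (+ iu 1)) (= k2u (+ iu k1u)) (= k3u (- iu k2u))))
(assert (and (>= iu 0) (< iu 10000)))
; Program logic for iteration v
(assert (and (= k1v (+ iv 1)) (= k2v (+ iv k1v)) (= k3v (- iv k2v))))
(assert (and (>= iv 0) (< iv 10000)))
; Different iterations, same array index => dependence
(assert (distinct iu iv))
(assert (= k3u k3v))
(check-sat)
```

For this instantiation, Z3 reports unsat: the index works out to k3 = −i − 1, which is injective in i, so no two iterations can touch the same element and the loop is parallelizable.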
Equation 1 shows the constraints generated for the array reference at line 6 paired with itself, for two separate iterations represented by the superscripts \(u\) and \(v\):

\[
\begin{align*}
(k3^u & = f3(i^u, k2^u)) \land (k2^u = f2(i^u, k1^u)) \land \\
(k1^u & = f1(i^u)) \land (i^u \geq 0) \land (i^u < 10000) \land \\
(k3^v & = f3(i^v, k2^v)) \land (k2^v = f2(i^v, k1^v)) \land \\
(k1^v & = f1(i^v)) \land (i^v \geq 0) \land (i^v < 10000) \land \\
(i^u & \neq i^v) \land (k3^u = k3^v)
\end{align*}
\]

(1)

The generated constraints are passed to the Z3 solver. If the solver returns Status.UNSATISFIABLE, the indices in the array references cannot be equal in different iterations, which implies that the loop is free of dependences and can be parallelized. Otherwise, the solver was able to satisfy the constraints and the loop contains some dependence; the loop is not parallelizable in such a case. The amount of computation and memory required for the points-to analysis and for solving the constraints using Z3 dictates that the analyses in AutoTornado must not be done at runtime. Even though we would have much more information about the loops during JIT compilation, we would incur a high overhead on each run. This is the main reason we designed AutoTornado as a static analysis.

### 3.5 Supporting Calls to Pure Functions

While dependences caused by scalars, fields and array references inside the loop body are taken care of by the analyses presented in the preceding sections, analyzing function invocations inside loops is not straightforward. Such an analysis requires checking the variables and references in the invoked functions for dependences in the context of the loop, in some sense extending the above-mentioned analyses to external functions. This problem can be solved by checking whether the function is free of references to non-local objects, i.e.
if the function is pure, since purity ensures that there are no dependences among different iterations of the loop, thus enabling the parallelization of loops containing function calls. Soot already provides an in-built purity-analysis module, but in our preliminary testing we found that it generates incorrect results for recent Java versions. Hence, we have written a new interprocedural purity analysis that uses Soot's points-to analysis [15] and call-graph construction modules. Our purity-analysis module analyzes the functions called within canonical for loops for purity, and reports the results in terms of two parameters: read impurity and write impurity. Read impurity implies that the function only accesses or reads non-local entities but does not modify them, whereas write impurity implies that the called function mutates or writes to a non-local entity. A pure function must not modify the state that existed before its invocation [3]. A Java method can do so through static fields and variables, references passed as parameters, and function calls. We first mark functions that access static field references as impure. The remaining function calls are handled using the interprocedural part of the analysis, which marks the caller function as impure if the called function is impure. For the parameters, any object reachable from a parameter should not be accessed inside the function, else the function is deemed impure. We use the points-to analysis provided by Soot, which establishes a relationship between variables and the objects they point to, to obtain all the objects that are transitively reachable from the objects pointed to by the parameters. We maintain a list that contains all the local variables that can point to an external object. This list is built iteratively, by adding the local variables that can alias with ones already in the list, and then also adding the ones that store the fields of external objects.
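The two impurity categories can be illustrated with small hypothetical methods (ours, not from the paper):

```java
class PurityExamples {
    static int counter = 0;

    // Pure: touches only its parameter value and locals.
    static int square(int x) {
        return x * x;
    }

    // Read-impure: reads the static field counter, but writes no non-local state.
    static int offset(int x) {
        return x + counter;
    }

    // Write-impure: mutates an object reachable from a parameter.
    static void increment(int[] a, int i) {
        a[i] = a[i] + 1;
    }
}
```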
Whenever an object referenced by a variable from this list is read, it indicates read-impurity, whereas a write to such an object indicates write-impurity. We store these results and use them to determine whether a function called from within a loop is pure. As an example, after extending the dependence analysis with purity analysis, AutoTornado can successfully mark the for loop in Figure 1 as parallelizable.

4 Runtime Support

Performing the analyses shown in Section 3 inside TornadoVM at runtime would have simplified actually parallelizing the loop. However, performance concerns render such sophisticated analyses in Java virtual machines infeasible, which raises the problem of communicating the results from the static analysis to TornadoVM. In this section, we describe how AutoTornado conveys static-analysis results to TornadoVM, along with the support added in TornadoVM to use the conveyed results for loop parallelization. As TornadoVM requires the @Parallel annotation to be placed above parallelizable for loops in the Java source code, and Soot works with Java class files, the straightforward solution would have been to insert these annotations in the bytecode itself. We refrained from using this approach because our analysis and parallelization are very sensitive to the bytecode indices and the local variable table generated after static compilation, and there is no one-to-one correspondence between these for Java source code and bytecode. Even a slight change in the instructions while generating a new class file can be fatal to the program semantics. The safer solution for communicating the static-analysis results of AutoTornado to the TornadoVM runtime is to create a map AnnotationMap : signature → List(Annotations). Each annotation consists of the start, length and slot (in the stack frame) of the iteration variable of the parallelizable loop, looked up from the LocalVariableTable in the corresponding Java class file.
signature denotes the bytecode signature of a Java method. This AnnotationMap is written to disk after AutoTornado returns, and is supplied to the VM (see Figure 3). The following is an example AnnotationMap for the code in Figure 6, with adapted functions f1, f2, f3: {< DepTest : foo(I)V >: [start : 2, length : 35, slot : 1]} When TornadoVM encounters a method call, it reads the existing @Parallel annotations from the class file and adds all the annotations to a method-local data structure. We modified TornadoVM to read, in addition to the annotations in the class file, the AnnotationMap generated by our static analysis, and to extend the data structure maintained by TornadoVM by looking up the signature of the called method in the map. This solution bypasses the need to change the class file, but is still able to communicate the results to the runtime.

5 Discussion

In this section, we highlight a few subtle aspects of the design decisions made while implementing AutoTornado, along with a discussion of the reasoning and the alternatives.

1. Static initializers. AutoTornado treats calls made to static initializers (<clinit> methods) as impure. We handle static initializers conservatively because it is difficult to predict when they are called during runtime (at the first reference to a class); marking them impure should not be an issue because, in general, they are used to assign values to static fields, which are essentially shared (global) variables anyway, thus leading to impurity for the enclosing loop.

2. Annotating class files with static-analysis results. As mentioned in Section 4, we have chosen to include static-analysis results in separate files (containing the AnnotationMap), for simplicity. In a production scenario, the results can be added as annotations in the class files themselves, without loss of generality, depending on the correctness of the support to maintain local-variable offsets in Soot and ASM. We leave this as a future engineering exercise.

3.
Validating static-analysis results. For an approach that uses static-analysis results to perform VM-level optimizations without programmer intervention, one way to assess the precision and the usefulness would be to validate the results with a parallel-programming expert. In this paper, we validate the precision and correctness of AutoTornado's results by checking whether the loops identified as parallelizable are a subset of the loops manually annotated by the TornadoVM designers, and the usefulness by measuring the speed-ups achieved with the parallelized loops (see Section 6). A future direction could be to export AutoTornado as an IDE plugin that suggests parallelizable loops to programmers, who can either accept or reject the suggestion; we mark this as an interesting software-engineering exercise.

6 Implementation and Evaluation

We have implemented the four static components of AutoTornado (highlighted in gray in Figure 3) in the Soot framework [24] version 4.1.0, over its Jimple intermediate representation, with the different modules implemented independently (thus being candidates for separately useful artefacts as well). The integration for constraint solving was done with the Z3 theorem prover [8], version 4.8.11. We have added the runtime-support code to TornadoVM version 0.11, and ran our experiments on an 11th Gen Intel Core i5 machine with an Intel Iris Xe Graphics chip. We have evaluated our tool on 14 benchmarks from the PolyBench suite [18], adapted to Java by the TornadoVM team [9] itself. The parallelizable loops in all these benchmarks are already annotated with @Parallel constructs, thus providing us a baseline for evaluating the precision and correctness of the loops identified as parallelizable by AutoTornado.
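For reference, a manually annotated kernel in these benchmarks looks roughly as follows (a sketch modelled on Saxpy; the @Parallel marker is TornadoVM's, declared here as a local stand-in so the snippet compiles without TornadoVM on the classpath):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Local stand-in for TornadoVM's @Parallel annotation.
@Target(ElementType.LOCAL_VARIABLE)
@Retention(RetentionPolicy.SOURCE)
@interface Parallel {}

class SaxpyKernel {
    // The manual annotation marks the induction variable of the parallel loop,
    // which is how the TornadoVM examples write it.
    static void saxpy(float alpha, float[] x, float[] y, float[] out) {
        for (@Parallel int i = 0; i < out.length; i++) {
            out[i] = alpha * x[i] + y[i];
        }
    }
}
```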
Additionally, we have evaluated our techniques on a series of synthetic benchmarks, written specifically to test individual cases that AutoTornado parallelizes, as well as to illustrate cases that pose challenges for further automatic parallelization. We plan to release our complete implementation, bundled with all the test cases, to the community as open source. AutoTornado can not only be used as a bundled tool to parallelize loops for TornadoVM, but its individual components (particularly the dependence analysis, the purity analysis, and the Soot-Z3 integration modules) can separately be used to develop various other Soot-based program analyses as well. We now present an evaluation to study the impact of our tool in supporting loop parallelization for TornadoVM. In particular, the next four subsections address the following four research questions, respectively:

• **RQ1.** How many of the manually parallelized loops are marked as parallelizable by AutoTornado?

• **RQ2.** Are the overheads with respect to the static-analysis time, the storage for the AnnotationMap, and the time spent in the VM significant?

• **RQ3.** How good are the speed-ups of AutoTornado-marked loops in TornadoVM?

• **RQ4.** What challenges are yet to be handled by future static-analysis-guided loop parallelizers?
### 6.1 AutoTornado precision

<table>
<thead>
<tr> <th>Name</th> <th># loops</th> <th># annot</th> <th># Id</th> <th>tAnalysis (s)</th> <th>szClassfile (B)</th> <th>szAnnotMap (B)</th> <th>tParallel (ms)</th> <th>tRead (ms)</th> </tr>
</thead>
<tbody>
<tr> <td>Convolution2D</td> <td>2</td> <td>2</td> <td>1</td> <td>–</td> <td>4527</td> <td>391</td> <td>1546.23</td> <td>7.94</td> </tr>
<tr> <td>Euler</td> <td>5</td> <td>2</td> <td>1</td> <td>–</td> <td>4767</td> <td>497</td> <td>191.03</td> <td>8.22</td> </tr>
<tr> <td>FDTD Solver</td> <td>6</td> <td>2</td> <td>1</td> <td>–</td> <td>7609</td> <td>886</td> <td>429.93</td> <td>7.56</td> </tr>
<tr> <td>FlatMapExample</td> <td>2</td> <td>1</td> <td>1</td> <td>–</td> <td>3815</td> <td>389</td> <td>638.05</td> <td>20.72</td> </tr>
<tr> <td>GSeidel2D</td> <td>2</td> <td>2</td> <td>0</td> <td>–</td> <td>4592</td> <td>0</td> <td>59.35</td> <td>0</td> </tr>
<tr> <td>Hilbert Matrix</td> <td>2</td> <td>2</td> <td>1</td> <td>–</td> <td>3218</td> <td>390</td> <td>863.65</td> <td>12.64</td> </tr>
<tr> <td>Jacobi1D</td> <td>2</td> <td>2</td> <td>1</td> <td>–</td> <td>4785</td> <td>739</td> <td>418.06</td> <td>7.06</td> </tr>
<tr> <td>Jacobi2D</td> <td>4</td> <td>4</td> <td>1</td> <td>–</td> <td>5222</td> <td>758</td> <td>70.66</td> <td>4.45</td> </tr>
<tr> <td>Mandelbrot</td> <td>3</td> <td>2</td> <td>0</td> <td>–</td> <td>9186</td> <td>0</td> <td>13740.69</td> <td>0</td> </tr>
<tr> <td>MatrixMul2D</td> <td>3</td> <td>2</td> <td>2</td> <td>–</td> <td>5304</td> <td>840</td> <td>50.73</td> <td>25.99</td> </tr>
<tr> <td>MatrixTranspose</td> <td>3</td> <td>2</td> <td>2</td> <td>–</td> <td>4229</td> <td>391</td> <td>1031.58</td> <td>25.52</td> </tr>
<tr> <td>Montecarlo</td> <td>1</td> <td>1</td> <td>0</td> <td>–</td> <td>3615</td> <td>0</td> <td>418.34</td> <td>0</td> </tr>
<tr> <td>Saxpy</td> <td>1</td> <td>1</td> <td>1</td> <td>–</td> <td>3658</td> <td>371</td> <td>842.44</td> <td>12.41</td> </tr>
<tr> <td>SGEMM FPGA</td> <td>3</td> <td>2</td> <td>2</td> <td>–</td> <td>3835</td> <td>388</td> <td>1058.39</td> <td>13.16</td> </tr>
<tr> <td><strong>GeoMean</strong></td> <td>2.30</td> <td>1.96</td> <td>1.22</td> <td>–</td> <td>4647.22</td> <td>466.25</td> <td>479.48</td> <td></td> </tr>
</tbody>
</table>

Figure 7. Evaluation metrics. Out of the total number of loops (loops) and the manually annotated @Parallel loops (annot), the loops identified by AutoTornado are shown in the Id column. tAnalysis denotes the time taken by the static analysis, in seconds. szClassfile and szAnnotMap respectively denote the size of the benchmark classes and of the static-analysis results, in bytes. tParallel and tRead respectively denote the total execution time and the run-time overhead of our approach, in milliseconds.

the associated analysis cost. On the other hand, though performing analyses statically takes care of the complexity and practicality to a great extent, our approach incurs additional overhead in terms of conveying the `AnnotationMap` to, and adding runtime support to read it in, TornadoVM. We assess the scale and impact of these overheads next. Column 5 in Figure 7 shows the total analysis time spent by AutoTornado, for all the benchmarks. This includes the time spent by Soot to construct control-flow graphs and call graphs, as well as the time spent by Z3 in solving the AutoTornado-generated dependence constraints. We note that the time spent across different benchmarks varies between 1 and 4 seconds; this time is more or less of the same order for different benchmarks due to the similarity in their size, but we expect it to increase proportionally with the size of the benchmark. Nevertheless, the total analysis time is reasonable to incur statically (i.e., offline). Columns 6 and 7 respectively show the size (in bytes) of the class files of the various benchmark programs (only the application) and the size of the AutoTornado-generated result files.
We observe that the extra space incurred by our approach to convey precise static-analysis results to TornadoVM is very small (on average about 500 bytes), which is just 10.2% of the overall class-file size. This shows that the results computed by our static analyses are small enough to be conveyed to the VM without much overhead. As explained in Section 4, we achieve this by storing the results mostly as integer values (storing information in terms of bytecode indices in the class files). Column 8 shows the total execution time (in milliseconds) of each (parallelized) program under consideration. Similarly, column 9 shows the time spent by our modified TornadoVM in reading the `AnnotationMap` and using the results to mark loops as parallelizable. As can be noted, the runtime overhead of our approach is of the order of a few milliseconds, which is negligible compared both to the amount of time that would have been needed for actually performing sophisticated program analyses in the JVM and to the total execution time. Also note that AutoTornado enables program-analysis based parallelization of loops irrespective of the tiered VM component translating the program (the interpreter or any of the JIT compilers), whereas even an imprecise version of such analyses could have been performed only if the given method was picked up by a JIT compiler.

### 6.3 Achieved speed-ups

The previous sections assert the precision and the efficiency of the loops parallelized by AutoTornado. We now assess the impact of loop parallelization itself, by comparing the execution times of the sequential and the AutoTornado-parallelized versions of various benchmarks on TornadoVM. Figure 8 shows the speed-ups achieved by the parallel versions of the benchmarks under consideration. We note that the speed-ups go up to 23x (for MatrixMul2D), and on average stand at 3.99x.
We also noted slowdowns on a few of the benchmarks, specifically FDTDSolver and MatrixTranspose, and suggest two ways to address them. First, the speed-ups may vary if the programs are executed on larger datasets and/or on higher-end systems. Second, and more importantly, the slowdowns indicate that not all parallelizable loops are good candidates for parallelization; communication overheads (particularly since TornadoVM targets heterogeneous systems consisting of GPUs and FPGAs, which have high communication costs, apart from CPUs) are an important factor. We have also observed that the imprecision mostly affects the outer loops (see Section 6.4), so that parallelizing only the inner loops increases the overall overhead of parallelization. We envisage that future studies could adapt the approaches of Surendran et al. [21] to filter out loops that may not be good candidates for parallelization on heterogeneous systems.

### 6.4 Challenges towards further parallelization

The loops that are not parallelized by AutoTornado but can be labelled as parallelizable manually fall, non-exhaustively, into the following two categories.

1. **Library function calls.** Even though we account for the purity of function calls during analysis in AutoTornado, library functions are (conservatively) marked as impure. This means that no loop containing a call to a library function, even one as simple as `sqrt`, is parallelized; AutoTornado failed to parallelize `Montecarlo` for this reason. We treated library functions as impure for two reasons. First, and more important, in a real-world scenario the JDK installation on the target machine may differ from the one available for static analysis, which may invalidate the statically generated results. Second, including library calls in the analysis blows up the size of Soot's call graph (due to its imprecision), making the analysis unsuitable for general-purpose machines with moderate compute capabilities.
Approaches such as the PYE framework [23] handle both these challenges by generating results that are conditional on the library methods, and the same strategy can be incorporated, if deemed suitable in terms of precision and scalability, by integrating AutoTornado with PYE.

2. **Unknown upper bounds during static analysis.** Multiple loops in our evaluation set are not parallelized because the upper bound of the iteration variable is unknown during static analysis. As loop upper bounds are generally runtime values, the constraints given to the Z3 solver are weaker than they would be at run time. Figure 9 shows an example of such a loop; as the values of rows and cols are unavailable during the analysis, the solver is able to find a dependence when \( j \geq rows \), for \( i_0 = 0 \) and \( i_1 = 1 \). Hence, the outer loop is not parallelized. In our evaluation, we found that about 83% of the imprecision of AutoTornado was due to unknown upper bounds. In the future, we plan to research how we can generate results in terms of conditions on loop bounds that can be efficiently resolved during execution.

```java
void hilbert(float[] output, int rows, int cols) {
  for (int i = 0; i < rows; i++) {
    for (int j = 0; j < cols; j++) {
      output[i * rows + j] = 1.0f / ((i + 1) + (j + 1) - 1);
    }
  }
}
```

Figure 9. Code for Hilbert Computation.

## 7 Related Work

Automating loop parallelization for achieving performance on multi-core (and, recently, heterogeneous) systems has been studied for a long time [2], and its optimality even for simple loops has been proved undecidable [10]. In this section, we primarily focus on related works that propose significant advancements in performing (loop) parallelization using dependence and/or purity analysis. The precision of finding and solving dependence constraints directly affects the parallelization, and hence several dependence analysis algorithms have been proposed.
Partition-based merging, implemented in Parascope [7], works by separating the array into separable, minimally coupled groups. Merging direction vectors [1] is a very common dependence analysis technique used by various automatic parallelization tools such as the Automatic Code Parallelizer [16]. The symbolic test and the Banerjee-GCD test [4] are used to detect data dependence among array references, assuming the loop is in normal form and the loop indices are affine functions. TornadoVM itself does not use dependence analyses, due to the overhead of performing these checks during program execution. Our dependence analysis is based on Z3, does not require indices to be affine functions (it is able to solve non-linear arithmetic), and most interestingly is performed statically (that is, without incurring any analysis or checking overheads during program execution in the JVM). Constraint solvers such as Z3 have been used extensively for the verification of programs. Bounded model checking [5] is a popular way of finding bugs in programs. Satisfiability modulo theories has been extended for the verification of higher-order programs [6] and for multi-threaded program verification [12]. Constraint-satisfaction techniques have also been used for the quantification of information flow in imperative programs using a SAT-based QIF [14]. On the other hand, Pugh and Wonnacott [19] were among the first to propose the use of constraint solving for dependence analysis. Inspired by these prior works, we feed the constraints identified by our dependence analysis, written in Soot, to the Z3 solver, and mark loops for parallelization when the dependence constraints are unsatisfiable. To the best of our knowledge, ours is the first approach that integrates Soot with Z3 for this purpose. Purity analysis [3, 20] identifies side-effect-free functions, and is imperative for the parallelization of otherwise dependence-free loops that contain calls to functions that may be impure. Süß et al.
propose a C extension [22] that marks pure function calls to support the parallelization of polyhedral loops. AutoTornado implements a stand-alone purity-analysis component (which works for recent versions of Java, unlike prior implementations in Soot), and hence naturally supports programs containing function calls inside Java for loops. A few prior works have proposed dependence-analysis-based loop parallelization with hybrid static+dynamic strategies. Oancea and Rauchwerger [17] use runtime information to improve the performance of static dependence analysis for FORTRAN. Recently, Jacob et al. [15] used staged dependence analysis while parallelizing Python loops on GPUs, to determine loop bounds and variable types that cannot be determined statically (Python being a dynamically typed language). Thakur and Nandivada [23], though for a different set of analyses, propose a static+JIT approach that statically encodes dependencies between a Java application and its libraries precisely as conditional values, and resolves them during JIT compilation. Our approach facilitates loop parallelization on a dynamic Java runtime by offloading complex program analyses to static time, and can be extended using such hybrid strategies to improve the precision further.

## 8 Conclusion

Loop parallelization, though one of the most promising ways to speed up programs on multi-core and heterogeneous systems, requires several expensive analyses for automation. For a language like Java, where most program analysis happens at runtime in a VM (and thus directly affects the execution time of programs), performing such analyses for loop parallelization not only presents several challenges in terms of integrating program analyses with constraint solvers, but may often also be prohibitively expensive.
In this paper, we proposed an approach that solves this problem by performing the required analyses statically and conveying the obtained results to a recent JVM that parallelizes loops for heterogeneous architectures. Our solution involved generating dependence constraints from Java bytecode, feeding them to a constraint solver, supporting calls to pure functions, generating results in a form that remains valid in the Java runtime, and modifying the VM to support static-analysis-guided parallelization. Our exposition describes the design decisions and implementation challenges, along with our novel solutions, in detail, and our tool AutoTornado is composed of several modules that can additionally be used for performing more such analyses in the future.

References
OASSIS: Query Driven Crowd Mining Yael Amsterdamer\textsuperscript{1}, Susan B. Davidson\textsuperscript{2}, Tova Milo\textsuperscript{1}, Slava Novgorodov\textsuperscript{1}, and Amit Somech\textsuperscript{1} \textsuperscript{1}Tel Aviv University, Tel Aviv, Israel \textsuperscript{2}University of Pennsylvania, Philadelphia, PA, USA ABSTRACT Crowd data sourcing is increasingly used to gather information from the crowd and to obtain recommendations. In this paper, we explore a novel approach that broadens crowd data sourcing by enabling users to pose general questions, to mine the crowd for potentially relevant data, and to receive concise, relevant answers that represent frequent, significant data patterns. Our approach is based on (1) a simple generic model that captures both ontological knowledge as well as the individual history or habits of crowd members from which frequent patterns are mined; (2) a query language in which users can declaratively specify their information needs and the data patterns of interest; (3) an efficient query evaluation algorithm, which enables mining semantically concise answers while minimizing the number of questions posed to the crowd; and (4) an implementation of these ideas that mines the crowd through an interactive user interface. Experimental results with both real-life crowd and synthetic data demonstrate the feasibility and effectiveness of the approach. Categories and Subject Descriptors H.2.8 [Database Applications]: Data mining 1. INTRODUCTION Consider the following scenario: Ann is planning a vacation in New York City with her family. In particular, she is interested in finding combinations of popular child-friendly activities and a nearby restaurant to eat at afterwards, and for each such combination, useful related advice (e.g., walk or rent a bike). She immediately thinks of two options: searching the web, or posting a question on some forum to receive input. However, both of these options have drawbacks. 
Web search (e.g., using a search engine, or a dedicated website like TripAdvisor) may return valuable information, but if Ann queries for child-friendly activities or for good restaurants she would still need to sift through the results to identify the appropriate combinations: not all good restaurants, even if child-friendly, are appropriate after a sweaty outdoor activity; a restaurant may be geographically close to some attraction but not easy to access; and so on. Moreover, much of the information is text-based, so finding related advice (e.g., walk or bike) may be time-consuming. Forums, on the other hand, are more likely to yield detailed answers relevant to Ann’s specific question. However, she would again receive a number of (wide-ranging) text-based results, which she would then have to manually examine and aggregate in order to extract the desired information. In this paper, we explore an alternative approach which broadens crowd data sourcing by enabling users to pose general queries to the crowd and receive concise, relevant answers that represent frequent, significant patterns. The user’s interaction with our system is based on two types of data sources: an ontology that captures general knowledge, and a crowd of data contributors with personal knowledge. Returning to our example, the ontology would include facts such as “Maoz Vegetarian is a restaurant in NYC”. These facts are illustrated in the sample ontology in Figure 1 by labeled edges connected to the Maoz Veg. element. Using the ontology, Ann can formulate questions such as “I seek an activity at a child-friendly attraction inside NYC and a nearby restaurant”. The ontology data, which consists of known and relatively stable information, can often be found in publicly available knowledge bases or mined from the web.
However, the ontology does not contain information about people’s habits, the frequency with which people do certain activities (the frequency of facts), or combinations thereof (the co-occurrence of facts within fact-sets). For instance, it does not contain information about how often people eat at Maoz Vegetarian when visiting Central Park, or whether they bike rather than walk. For this dynamic and perhaps unknown individual data, the system mines the crowd by asking them questions. A crowd member’s personal history is modeled as a bag of fact-sets (see Table 3), each representing an occasion in their past, which they may not completely recall. A personal history can be thought of as a black box, some details of which can be exposed through the person’s answers to questions [5]. For example, if asked the concrete question of whether they bike in Central Park, a person may be able to say “Yes, about once a week” even if they cannot remember the exact dates [5]. People may also be able to answer more general yet focused questions, e.g., what types of activities they typically do in Central Park – an open specialization question.

2. A declarative query language \textit{OASSIS-QL} (Ontology ASSISted crowd mining Query Language) in which user information needs can be formulated in order to mine relevant, frequent fact-sets from the crowd.

3. An efficient query evaluation algorithm for computing semantically concise answers to \textit{OASSIS-QL} queries. The algorithm builds its output incrementally, effectively pruning the search space at each step to minimize the number of questions posed to the crowd.

4. A prototype system \textit{OASSIS}, which implements the above ideas and evaluates user-specified \textit{OASSIS-QL} queries with the help of the crowd, through a dedicated crowdsourcing interface.

5. A demonstration of the effectiveness and efficiency of \textit{OASSIS} through an extensive experimental evaluation, which includes both real crowd and synthetic data.
\textit{OASSIS-QL} forms the necessary formal foundations for evaluating queries such as Ann’s. Moreover, it could also be used for mining fact-sets from standard databases and thus represents an independent contribution outside of the crowd setting. Our experiments with \textit{OASSIS} show the feasibility of the approach. Important future extensions include using natural language to formulate queries; mining patterns other than fact-sets, e.g., association rules; interactively extending and cleaning the ontology data with the help of the crowd; controlling the selection of crowd contributors; and retrieving only the top-k query answers. \textbf{Outline of paper.} We present a formal model of the vocabulary, ontology and crowd in Section 2. The query language in which user questions are formalized, \textit{OASSIS-QL}, is described in Section 3. Sections 4 and 5 discuss the evaluation of \textit{OASSIS-QL} queries. The implementation of the \textit{OASSIS} prototype system, as well as experimental results, are described in Section 6. Related work is in Section 7, and we conclude in Section 8.

\section{2. PRELIMINARIES}

We start by presenting a simple, generic model that captures (i) general knowledge, including a vocabulary of terms and an ontology of universal facts, and (ii) individual knowledge, namely the personal knowledge of each member of the crowd, to be collectively mined. These components will be used in formulating crowd mining queries (Section 3). \textbf{Definition 2.1} (Vocabulary). A vocabulary \( \mathcal{V} \) is a tuple \((\mathcal{E}, \leq, \mathcal{R}, \leq_R)\), where \( \mathcal{E} \) and \( \mathcal{R} \) are sets of element and relation names, respectively, and \( \leq \) and \( \leq_R \) are partial orders over these sets, respectively. Elements in \( \mathcal{E} \) can be nouns such as Place or NYC, and actions such as Biking. Relations in \( \mathcal{R} \) can be terms such as inside, nearBy and parentOf.
\( \leq \) signifies a semantically reversed subsumption relationship between elements, e.g., biking is a sport and hence Sport \( \leq \) Biking; \( \leq_R \) signifies a similar order over relations. From now on, we assume a fixed vocabulary \( \mathcal{V} \). Next, we define the notion of facts, in the spirit of languages such as RDF and OWL [17, 22]. \textbf{Definition 2.2} (Facts and Fact-sets). A fact \( f \) over \( \mathcal{V} = (\mathcal{E}, \leq, \mathcal{R}, \leq_R) \) is a triple \((e_1, r, e_2)\) \( \in \mathcal{E} \times \mathcal{R} \times \mathcal{E} \). A fact-set \( A \) is a subset of \( \mathcal{E} \times \mathcal{R} \times \mathcal{E} \). "I played basketball in Central Park (last weekend)", which cannot be directly and is only used to model a crowd member’s history; we formalize this by extending \( \leq \) to facts and fact-sets. **Definition 2.5 (Facts and fact-sets partial order).** **Facts.** Let \( f = \langle e_1, r_1, e'_1 \rangle \) and \( f' = \langle e_2, r_2, e'_2 \rangle \) be facts. We say that \( f \leq f' \) if \( e_1 \leq e_2 \), \( r_1 \leq_R r_2 \) and \( e'_1 \leq e'_2 \). **Fact-sets.** Let \( A \) and \( B \) be fact-sets. \( A \leq B \) iff for every fact \( f \) in \( A \) there exists a fact \( f' \) in \( B \) such that \( f \leq f' \). We say that a transaction \( T \) implies a fact \( f \) (resp., a fact-set \( A \)) if \( \{f\} \leq T \) (resp., \( A \leq T \)), where \( T \) is viewed as a fact-set.
**Example 2.6.** Returning to our running example, the following also hold: letting \( f_1 = \langle \text{Sport, doAt, Central_Park} \rangle \) and \( f_2 = \langle \text{Biking, doAt, Central_Park} \rangle \), then \( f_1 \leq f_2 \) since \( \text{Sport} \leq \text{Biking} \); supposing that \( \text{nearBy} \leq_R \text{inside} \) is in our vocabulary, then letting \( f_3 = \langle \text{Central_Park, inside, NYC} \rangle \) and \( f_4 = \langle \text{Central_Park, nearBy, NYC} \rangle \), we have \( f_4 \leq f_3 \). Based on the previous observations, \( \{f_1\} \leq \{f_1, f_3\} \leq \{f_2, f_3\} \); and for \( T_1 \) from Table 3, \( \{f_1\} \leq T_1 \). The significance of a fact-set \( A \) for a person \( u \) is measured by its support in the transactions of \( D_u \), defined as \( \text{supp}_u(A) := |\{T \in D_u \mid A \leq T\}| / |D_u| \). **Example 2.7.** Consider again the personal DBs in Table 3, and the fact-set \( A_1 = \{\langle \text{Pasta, eatAt, Pine} \rangle, \langle \text{Activity, doAt, Bronx_Zoo} \rangle\} \). For \( D_{u_1} \) we have, e.g., \( \text{supp}_{u_1}(A_1) = \frac{1}{3} \), since the two facts of \( A_1 \) are implied by \( T_2 \) and \( T_5 \). **Questions to the crowd.** Recall that \( D_u \) is virtual, and so we cannot access it to compute the support of a given fact-set. Instead, we infer this support by asking \( u \) a question about its frequency, as in [3]. We define the following two types of questions to the crowd. **Concrete questions:** retrieve just the support of a given fact-set from the crowd member. **Specialization questions:** given a fact-set, ask the crowd member to specify a more specific, significant fact-set along with its support, to speed up information gathering (see Section 4.1).
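The fact and fact-set orders and the support definition above can be sketched directly in code. The mini-vocabulary, transactions, and method names below are illustrative stand-ins, not the paper's ontology or implementation:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch following Definitions 2.2-2.5: a fact f is below a fact g when
// each component of f is below the corresponding component of g in the
// vocabulary order, and supp_u(A) is the fraction of transactions of D_u
// implying A. The vocabulary here is a small illustrative stand-in.
public class SupportSketch {
    // vocabulary order: specific element -> the more general elements above it
    public static final Map<String, Set<String>> GENERALIZES = Map.of(
            "Biking", Set.of("Sport", "Activity"),
            "Sport", Set.of("Activity"));

    public static String[] fact(String s, String r, String o) {
        return new String[] {s, r, o};
    }

    public static boolean leq(String general, String specific) { // general <= specific
        return general.equals(specific)
                || GENERALIZES.getOrDefault(specific, Set.of()).contains(general);
    }

    public static boolean factLeq(String[] f, String[] g) {      // component-wise order
        return leq(f[0], g[0]) && leq(f[1], g[1]) && leq(f[2], g[2]);
    }

    // A <= T: every fact of A is implied by some fact of the transaction T
    public static boolean factSetLeq(List<String[]> a, List<String[]> t) {
        for (String[] f : a) {
            boolean implied = false;
            for (String[] g : t) implied |= factLeq(f, g);
            if (!implied) return false;
        }
        return true;
    }

    public static double support(List<String[]> a, List<List<String[]>> db) {
        long n = db.stream().filter(t -> factSetLeq(a, t)).count();
        return (double) n / db.size();
    }

    public static void main(String[] args) {
        List<List<String[]>> db = List.of(
                List.of(fact("Biking", "doAt", "Central_Park")),
                List.of(fact("Sport", "doAt", "Central_Park")),
                List.of(fact("Biking", "doAt", "Bronx_Zoo")));
        // {<Sport, doAt, Central_Park>} is implied by the first two transactions.
        System.out.println(support(
                List.of(fact("Sport", "doAt", "Central_Park")), db)); // prints 0.6666666666666666
        // "Once a month" translates to the support value 12/365 (12 days a year).
        System.out.println(12.0 / 365.0);
    }
}
```

Note how an answer frequency ("once a month") and a support value are interchangeable under the fixed 365-day horizon used in the examples.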
A concrete question about, e.g., \( \{\langle \text{Biking, doAt, Central_Park} \rangle, \langle \text{Rent_Bikes, doAt, Boathouse} \rangle\} \) may be presented to the user as “How often do you go biking in Central Park and rent bikes at the Boathouse?” The user’s answer could be “Once a month”, which can then be interpreted as, e.g., the support value \( \frac{12}{365} \) (signifying 12 days a year). An example of a specialization question might be “What type of sport do you do in Central Park? How often do you do that?” The answer could be, e.g., “Basketball” for the first part and “every other Sunday” for the second, translating the frequency to support as in a concrete question.

```
 1  SELECT FACT-SETS
 2  WHERE
 3    { $w subClassOf* Attraction.
 4      $x instanceOf $w.
 5      $x inside NYC.
 6      $x hasLabel "child-friendly".
 7      $y subClassOf* Activity.
 8      $z instanceOf Restaurant.
 9      $z nearBy $x }
10  SATISFYING
11    { $y doAt $x.
12      [] eatAt $z
13    }
14  WITH SUPPORT = 0.4
```

Figure 2: Sample OASSIS-QL Query

Previous crowd mining work observed that interleaving open-ended and concrete questions is beneficial: specialization questions allow prominent, significant fact-sets to be found quickly, by simply asking people to specify them, whereas concrete questions are helpful to dig deeper into people’s memories and discover data they may not have recalled spontaneously [3, 5]. Given the definition of “personal” support, we can define the overall significant fact-sets as those for which the average support over all crowd members exceeds some predefined threshold. Choosing the support threshold is a general problem in data mining [1], but in our setting it has an intuitive interpretation: support is the average frequency of a habit, and the support threshold represents the minimum frequency of interest; e.g., to discover habits that the average user engages in at least 3 times a year, one can use the threshold 3/365. Moreover, given the significant fact-sets w.r.t.
a certain threshold, we can more easily compute the significant fact-sets for a different threshold, by caching and re-using the crowd answers for the new mining task. See details in Section 6.3. Since we cannot pose all the questions to all the crowd members and crowd answers are not precise, in practice it is necessary to resort to estimations of this average. For simplicity we use below a simple estimation method where each question is posed to a fixed-size sample of the crowd members and the answers are averaged. More generally one could use any black-box (e.g., of [3, 25]) to determine the number of users to be asked and how to aggregate their answers, see Section 4.2. 3. THE OASSIS-QL QUERY LANGUAGE We now present our query language, OASSIS-QL, which extends the RDF query language SPARQL [26] with features that account for mining frequent fact-sets. The syntax and semantics of OASSIS-QL are illustrated via the example in Figure 2, which represents the scenario presented in the Introduction: “Find popular combinations of an activity in a child-friendly attraction in NYC and a restaurant nearby (plus other relevant advice)”. The query answers would include a formal representation of, e.g., “Go biking in Central Park and eat at Maoz Vegetarian (tip: rent the bikes at the Boathouse).”, “Play ball games in Central Park and eat at Maoz Vegetarian” and “Feed a monkey at the Bronx Zoo and eat at Pine Restaurant”, given as fact-sets in RDF notation. The advantages of using OASSIS-QL for crowd mining are: 1) the language is rich, and can be used by experienced users to express a wide range of queries (see the discussion about the expressivity of OASSIS-QL at the end of the section); and 2) since it is based on SPARQL, SPARQL programmers can easily learn it, and user-friendly query formulation tools for inexperienced users can be developed by adapting existing tools for SPARQL. These include, e.g., tools that translate natural language to SPARQL. 
As a first step, we offer a full language guide with many examples [24], and a user-friendly query editor (see Section 6.2). We next explain how the different parts of the query are used to formulate Ann’s intuitive question, and the form of the requested answers. Overview of syntax. The SELECT statement (line 1) specifies the format of the answers, where FACT-SETS requests output in the form of fact-sets. (Alternatively, VARIABLES can be used to request relevant variable assignments.) The WHERE statement (lines 2-9) defines a SPARQL-like selection query on the ontology \( O \), which computes a set of possible assignments for the query variables. Using these assignments, the SATISFYING statement (lines 10-14) defines the data patterns to be mined from the crowd. Patterns with a high enough support are returned in the requested format. SPARQL-based features. The WHERE statement defines a meta-fact-set, where some of the elements or relations are replaced by variables, denoted by $ (e.g., $x). [] stands for “anything”, and is used when we do not care about a value, as long as one exists. An assignment \( \varphi \) maps the query variables to relations and elements. \( \varphi \) is said to be valid if by applying it to all the variables in the WHERE meta-fact-set, we obtain a fact-set \( A \subseteq O \), i.e., a set of facts which is semantically implied by the ontology. Another useful feature of SPARQL shown in the example is $w subClassOf* Attraction, which defines a path of 0 or more facts with the subClassOf relation connecting $w and the Attraction element. In this manner, we can select any (perhaps indirect) subclass of Attraction. Basic crowd mining features. Valid assignments to the WHERE statement variables are applied to the variables in the meta-fact-set of the SATISFYING statement, to obtain fact-sets whose support should be mined from the crowd.
Intuitively, these are the parts of the user question that are not general knowledge but depend on human judgment (e.g., “popular”). The syntax of the SATISFYING statement is composed of a SPARQL part for defining the meta-fact-set, and a WITH SUPPORT part which defines the required support threshold (see Section 2 for a discussion on how to set this threshold). We say that an assignment \( \varphi \) is significant if the support of the fact-set it defines exceeds the threshold. We denote by \( A_{\text{WHERE}} \) and \( A_{\text{SAT}} \) the meta-fact-sets of the WHERE and SATISFYING statements, respectively. Given an assignment \( \varphi \), we abuse notation and denote by \( \varphi(A) \) the fact-set resulting from applying \( \varphi \) to all the variables in the meta-fact-set \( A \). Example 3.1. Assuming that the crowd contains only the two users \( u_1 \) and \( u_2 \), we explain the semantics of the example query (ignore the colored text, to be explained shortly) on the sample ontology and databases (see Figures 1 and 2 and Table 3). There are several valid assignments w.r.t. the WHERE statement of this query, including the assignment \( \varphi_{16}: \$x \mapsto \text{Central_Park}, \$w \mapsto \text{Park}, \$y \mapsto \text{Biking}, \$z \mapsto \text{Maoz_Veg} \), and the assignment \( \varphi_{20} \), which differs from \( \varphi_{16} \) by mapping \( \$y \) to Baseball (the assignment indices will be useful in the sequel). The average support of \( \varphi_{16}(A_{\text{SAT}}) \) is \( \text{avg}(1/3, 1/2) = 5/12 \), which exceeds the threshold in line 14, whereas that of \( \varphi_{20}(A_{\text{SAT}}) \) is \( \text{avg}(1/6, 1/2) = 1/3 \). Hence, \( \varphi_{16} \) is significant and \( \varphi_{20} \) is not. Advanced features. MSP. Consider an assignment that differs from \( \varphi_{16} \) by mapping \( \$y \) to Sport.
This assignment is more general (and thus less informative) than \( \varphi_{16} \): applying it to the SATISFYING or SELECT statements yields fact-sets that are more general (according to the partial order on fact-sets) than those yielded by applying \( \varphi_{16} \). By default, OASSIS-QL queries return only the maximal (i.e., most specific) significant patterns (MSPs), which form a concise output representation. The rest of the patterns can then be inferred by generalization. To obtain all of the significant patterns one can append the keyword ALL to the SELECT line. See the formal definition of MSPs in Section 4.1. Multiplicities. A user may wish to know if multiple similar facts co-occur in the same occasion. This multiplicity can be specified in the SATISFYING clause of an OASSIS-QL query by attaching to each variable or fact how many instantiations of it we are interested in (see the “+” after $y). Standard notations like +, *, ? can be used for “at least one”, “any number” and “optional”, respectively. The default multiplicity is “exactly one”. The semantics of a query with multiplicities is that sets of values are assigned to variables instead of single values. Multiplicity 0 is equivalent to deleting all the meta-facts containing the variable. MORE. The MORE keyword in line 13 is used to identify any fact-set which commonly co-occurs with the other facts, i.e., the “...plus other relevant advice, if there is any” part of the user query. It is syntactic sugar for \( \langle \$u\ \$y\ \$v \rangle \)*, i.e., any number of unrestricted facts. See [24] for more details and additional advanced OASSIS-QL features. **Example 3.2.** Our example query allows any multiplicity greater than 0 for $y, and any multiplicity of MORE facts. The assignments \( \varphi_{16} \) and \( \varphi_{20} \) discussed earlier had multiplicity 1 for $y and 0 MORE facts.
As another example, \( \varphi_{16} \) could be extended to include the MORE fact Rent_Bikes doAt Boathouse (multiplicity 1), signifying the combination of biking in Central Park, eating at Maoz Veg., and also renting bikes at the Boathouse. Another extension of \( \varphi_{16} \) could map $y to {Biking, Ball_Game} (multiplicity 2), signifying the combination of biking in Central Park, playing ball games in Central Park, and eating at Maoz Veg. Both extended assignments are valid w.r.t. the query, but only the former is significant; it yields a fact-set that is implied by transactions \( T_3 \), \( T_5 \) and \( T_7 \) in the example DBs in Table 3, and thus has an average support of \( 5/12 \). Expressivity. As OASSIS-QL is based on SPARQL, its capabilities for performing selection over the ontology are directly derived from those of SPARQL. As we show in the next section, using multiplicities makes it possible to capture standard frequent itemset mining with OASSIS-QL. To simplify the presentation, we only show here how to mine fact-sets from the crowd for a given support threshold; some additional features, such as mining association rules, are described in the language guide [24], and possible future extensions are discussed in Sections 1 and 8.

Figure 3: Example partial order over assignments

**Query evaluation.** Note that nothing in the syntax or semantics of OASSIS-QL requires the use of the crowd in the evaluation process: had the user databases been available, the query could have been evaluated directly over them. Since in practice the user databases are only virtual and cannot be materialized [5], one needs to mine the crowd to obtain the relevant information, and our evaluation techniques are developed accordingly. In the next two sections, we explain how OASSIS-QL queries are evaluated by computing valid assignments and identifying which ones are significant. We first assume in Section 4 that we have all the valid assignments, and discuss how to use the crowd to determine the significant assignments.
Computing the set of valid assignments in an efficient manner is then discussed in Section 5.

4. MINING THE CROWD

We start by describing query evaluation (focusing on the SATISFYING clause) for a single crowd member, and then extend the techniques to multiple users.

4.1 Evaluation with a Single User

We describe the vertical algorithm, which interactively chooses the next question to pose to the crowd. We start by defining a (semantic) partial order over assignments, and then describe how this order is exploited by the algorithm. Partial order over assignments. We formally define an assignment (with multiplicities) \( \varphi \) as a mapping from the variable space \( X \) to \( P(\mathcal{E}) \cup P(\mathcal{R}) \), i.e., to sets of vocabulary elements or relations. The partial order over these assignments is defined as follows. **Definition 4.1.** (Assignment order relation). Let \( \varphi \) and \( \varphi' \) be assignments. \( \varphi' \) is a successor of \( \varphi \), denoted by \( \varphi \leq \varphi' \), if for every variable \( x \in X \) and every value \( v \in \varphi(x) \), there exists \( v' \in \varphi'(x) \) s.t. \( v \leq v' \). We use \( \prec \) to denote immediate successors, i.e., \( \varphi \prec \varphi' \) if \( \varphi \leq \varphi' \) and there exists no \( \varphi'' \) (different from \( \varphi, \varphi' \)) s.t. \( \varphi \leq \varphi'' \leq \varphi' \). When \( |\varphi(x)| = 1 \), we may identify \( \varphi(x) \) with its single element. **Example 4.2.** We continue explaining the evaluation of the sample query in Figure 2, but to simplify the presentation, we consider only the parts highlighted by a grey background (i.e., without the nearby restaurant). Figure 3 illustrates parts of the order relation over assignments. Each node represents an assignment \( \varphi \), and contains two values, \( \varphi(x) \) and \( \varphi(y) \). (The value of \( w \) is omitted.) There is an edge between \( \varphi \) and \( \varphi' \) if \( \varphi \prec \varphi' \). Some of the assignments, marked by a dashed outline, are not valid w.r.t.
the WHERE clause (they are nevertheless relevant, since our algorithm will explore them). Dotted edges denote omitted ancestors or descendants. Node colors are explained in Example 4.5. Recall assignment \( \varphi_{20} \) from Example 3.1, restricted to the variables \( x, y \) (the assignment index denotes the number of the relevant node in Figure 3). Also consider \( \varphi_{17} \) (node 17). It holds that \( \varphi_{17} \preceq \varphi_{20} \), since the value of \( y \) is more specific in \( \varphi_{20} \). In fact, \( \varphi_{17} \prec \varphi_{20} \), since \( \text{Ball_Game} \prec \text{Baseball} \). We can now formally define MSP assignments. **Definition 4.3** (MSPs). Given an OASSIS-QL query \( Q \), a valid and significant assignment \( \varphi \) to \( Q \)'s variables is an MSP if \( \varphi \) is maximal w.r.t. \( \leq \), i.e., it has no valid and significant successor \( \varphi' \). We can infer that an assignment is (in)significant from its successors and predecessors, using the following observation. **Observation 4.4.** If \( \varphi \preceq \varphi' \) and \( \varphi' \) is significant, then so is \( \varphi \). Since all the significant assignments but the MSPs have significant successors, their significance may be inferred. Hence, the MSPs form a concise representation of the full query result, and it suffices to compute only them. Such an inference scheme forms the core of many crowd/data mining techniques [10, 20, 2]. **Example 4.5.** The color codes in Figure 3 are as follows: MSPs are painted orange; other significant assignments are yellow; insignificant assignments are grey; and the minimal among those (i.e., the most general ones) are painted dark grey. Note that indeed, for every significant assignment, the preceding assignments (its ancestors in the graph) are significant; and this holds symmetrically for insignificant assignments and their descendants.
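The order relation of Definition 4.1 can be sketched concretely. The GENERALIZES table below is an assumed toy fragment of the vocabulary hierarchy, and the dict-of-sets representation of assignments is an assumption of this sketch, not the paper's data structures.

```python
# Sketch of the order relation over assignments (Definition 4.1), assuming a
# toy table mapping each term to the set of terms it generalizes (including
# itself). All names and data here are illustrative.

GENERALIZES = {
    "Activity": {"Activity", "Sport", "Biking", "Ball_Game", "Baseball"},
    "Sport": {"Sport", "Biking", "Ball_Game", "Baseball"},
    "Ball_Game": {"Ball_Game", "Baseball"},
}

def value_leq(v, v2):
    # v <= v2: v2 is at least as specific as v.
    return v2 in GENERALIZES.get(v, {v})

def assignment_leq(phi, phi2):
    # phi <= phi2 (Definition 4.1): for every variable x and every value v
    # in phi(x), some value v2 in phi2(x) satisfies v <= v2.
    return all(
        any(value_leq(v, v2) for v2 in phi2[x])
        for x, values in phi.items()
        for v in values
    )

# Nodes 17 and 20 of Figure 3: phi17 <= phi20 since Ball_Game <= Baseball.
phi17 = {"$x": {"Central_Park"}, "$y": {"Ball_Game"}}
phi20 = {"$x": {"Central_Park"}, "$y": {"Baseball"}}
```

By Observation 4.4, if `phi20` is found significant then `phi17` can be classified as significant without asking the crowd; symmetrically, insignificance of `phi17` classifies `phi20`.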
**The vertical algorithm.** The output of the vertical algorithm (Algorithm 1) is the set of valid MSPs. Given a pre-computed set of valid assignments \( \mathcal{A}_{\text{valid}} \), the algorithm first expands this set in line 1 by adding every assignment that is more general than some assignment in \( \mathcal{A}_{\text{valid}} \). This expansion improves the algorithm's performance (see below, and Section 6.4, where we compare our algorithm to the naïve approach) as well as the user experience (Section 4.2). Then, it initializes \( M \) to be an empty set. As long as there exists an unclassified assignment, i.e., one not yet known to be (in)significant, the algorithm chooses an unclassified assignment \( \varphi \) which is minimal in the partial order (i.e., most general). It then checks whether \( \varphi \) is significant (by asking the crowd, using the function \( \text{ask}(\cdot) \)). If so, it looks for an unclassified assignment \( \varphi' \) s.t. \( \varphi \preceq \varphi' \). For each such \( \varphi' \), if it is significant, the algorithm updates \( \varphi \) to be \( \varphi' \), and in the next iteration of the internal loop it searches for an even more specific assignment; and so on. The most specific (maximal) significant assignment found in this manner is appended to \( M \). Finally, the set of valid MSPs is returned as the output. Note that every call to \( \text{ask}(\cdot) \) can classify multiple assignments (see Observation 4.4). **Example 4.6.** Let us trace one iteration of the outer loop of Algorithm 1 for the user \( u_{\text{Avg}} \), whose answers are the average supports of \( u_1 \) and \( u_2 \) from the running example. Assume that the input assignments include all the assignments in Figure 3. At the beginning of the algorithm, all the assignments are unclassified, and thus node 1 represents the minimal (most general) unclassified one.
The algorithm can then ask \( u_{\text{Avg}} \) about a sequence of successors of node 1, for example in the following order: 1, 3, 7, 10, 11, 15, 17, and all of 17’s successors (including 18-20). This order is obtained by replacing \( \varphi \) with discovered significant successors (e.g., node 3) in the inner loop, and ignoring insignificant successors. For 17 no significant successor is found, and hence it is correctly identified as an MSP. In a second iteration of the outer loop, the selected minimal unclassified assignment could be, e.g., node 2. **Algorithm analysis.** The correctness of the vertical algorithm follows from the monotonicity of significance and the correctness of the inference scheme (proof omitted). We show next that the algorithm is efficient in terms of the number of questions posed to the crowd. When multiplicities are introduced, the query language is expressive enough to capture standard data mining: e.g., to capture mining for frequent itemsets, use an empty WHERE clause and a SATISFYING clause consisting of a single meta-fact with the “+” multiplicity. In this case, the total number of assignments can be exponential in the vocabulary size. Luckily, we can show that even when this is the case, the number of crowd questions can be much smaller in practice. We define the crowd complexity of an algorithm that evaluates OASSIS-QL queries as the number of unique questions posed to the crowd by the algorithm. Let \( msp \) denote the set of MSPs over the expanded assignment set, and \( msp_{\text{valid}} \) the set of MSPs among the valid assignments. The following proposition states the crowd complexity of the vertical algorithm; it can be proved based on the top-down assignment traversal order, as well as the expansion of \( \mathcal{A}_{\text{valid}} \). **Proposition 4.7.** The crowd complexity of evaluating an OASSIS-QL query with the vertical algorithm is \( O((|\mathcal{E}| + |\mathcal{R}|) \cdot (|msp| + |msp_{\text{valid}}|)) \). We now show a lower bound. **Proposition 4.8.** The crowd complexity of computing the answer to an OASSIS-QL query using only concrete crowd questions is \( \Omega(|msp| + |msp_{\text{valid}}|) \).
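Stepping back to the traversal itself, the vertical algorithm's top-down descent can be sketched as follows. This is a simplified sketch, not the paper's Algorithm 1: it assumes ask(·) directly returns whether an assignment is significant for a single user, the helper names and the list-based choice of a most general assignment are assumptions, and the toy instance at the end is invented for illustration.

```python
# Sketch of the vertical algorithm's traversal. The assignments list must be
# ordered from general to specific, so that taking the first unclassified
# entry stands in for "choose a minimal (most general) unclassified one".

def vertical(assignments, successors, ask, is_valid):
    classified = {}  # assignment -> True (significant) / False
    msps = []
    while True:
        unclassified = [a for a in assignments if a not in classified]
        if not unclassified:
            break
        phi = unclassified[0]            # a most general unclassified one
        if not ask(phi):                 # one crowd question
            classified[phi] = False
            continue
        classified[phi] = True
        progressed = True
        while progressed:                # descend to more specific ones
            progressed = False
            for nxt in successors(phi):
                if nxt in classified:
                    continue
                classified[nxt] = ask(nxt)
                if classified[nxt]:
                    phi, progressed = nxt, True
                    break
        msps.append(phi)                 # no significant successor found
    return [m for m in msps if is_valid(m)]

# Toy instance: Biking is the only maximal significant assignment.
SUCC = {"Activity": ["Sport"], "Sport": ["Biking", "Skiing"]}
SIGNIFICANT = {"Activity", "Sport", "Biking"}   # simulated crowd answers
MSPS = vertical(["Activity", "Sport", "Biking", "Skiing"],
                lambda a: SUCC.get(a, []),
                lambda a: a in SIGNIFICANT,
                lambda a: True)
# MSPS == ["Biking"]
```

In the toy run the questions follow the general-to-specific order Activity, Sport, Biking, and only then Skiing, mirroring the node order of Example 4.6.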
In practice, \( |msp_{\text{valid}}| \) and even \( |msp| \) are typically of reasonable size, and our optimizations further improve the algorithm's performance in practice. See Section 6. **Speeding up with specialization questions.** Consider the meta-fact-set \( \langle \$y\ \text{doAt}\ \$x \rangle \), and assume that we have already established that \( \varphi: \$x \mapsto \text{Central_Park}, \$y \mapsto \text{Sport} \) is significant. This assignment may have many successors in the partial order, as many as the sport types in our vocabulary. However, some of them may be easily pruned by a human user, who knows, e.g., that people do not ski in Central Park. By asking specialization questions about \( \varphi \), e.g., “What type of sport do you do in Central Park?”, we are guaranteed to find a successor assignment that is frequent at least in the current user’s DB, if any exists. Thus, we are more likely to discover additional significant assignments quickly. This is especially beneficial in incremental evaluation or when the number of query results is limited. To choose which type of questions to ask, we have used, in previous work, a parameter for the ratio of open-ended crowd questions vs. more concrete ones [3]. In our experiments, instead of forcing the crowd members to answer a specific question type, we allowed them to choose the question type. This was done to study their preferences and to improve their user experience. To study the effect of different ratios of specialization and concrete questions, we varied this ratio in synthetic experiments. See Section 6.

4.2 Evaluation with Multiple Users

We next consider multiple crowd members working in parallel. Given a set of answers from different crowd members to some question, we assume a black-box aggregator that decides (i) whether enough answers have been gathered and (ii) whether the assignment in question is significant or not.
Generally, such a black-box could be designed to ensure the quality of answers, both for individual answers (e.g., outlier detection [21]) and for aggregated answers (e.g., error probability, or an average weighted by trust [3]). **The multiple users algorithm.** The assignments per crowd member are traversed in the same top-down order as in the case of a single member, but inferences are made based on the globally collected knowledge. More specifically, Algorithm 1 is changed as follows.

1. The outer loop is executed per user, and can be terminated at any point if the user does not wish to answer more questions.
2. The answers of different users, obtained through the ask(·) function, are recorded per assignment.
3. The if condition within the ask(·) function is changed to “\( \varphi \) is overall significant”, which is decided by the black-box aggregator. The aggregator can answer yes, no, or undecided; in the last case not enough answers have been collected for \( \varphi \), and no inference takes place.
4. The return value of ask(·) is true if the user's reported support \( s \) is at least the threshold \( \theta \) AND \( \varphi \) is not overall insignificant; this prevents the user from being asked about successors of \( \varphi \) if \( \varphi \) is insignificant either for the current user or overall.
5. In line 8 of the main algorithm, \( \varphi \) is added to \( M \) only if it became an overall MSP by the user's answer.

The first assignment a user is asked about is a minimal unclassified node, which could be relatively specific among all the assignments. To avoid this, the algorithm can be refined to start the traversal from the overall most general assignment (even if it is already classified), and then navigate to a minimal unclassified assignment. This may lead to some redundant questions, but in practice has the advantage of speeding up the computation: when a general assignment is discovered to be insignificant, its (typically many) successors can be pruned for this user. In addition, it allows for a pleasant user experience.
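A very simple instance of such a black-box aggregator, along the lines of a fixed answer count with an averaged support, can be sketched as follows; the interface, the function names, and the defaults are assumptions of this sketch rather than the system's actual aggregator.

```python
# Sketch of a black-box aggregator: record answers per assignment, stay
# undecided (None) until enough answers arrive, then classify by comparing
# the average reported support to the threshold. Illustrative interface.

def make_aggregator(threshold, answers_needed=5):
    answers = {}  # assignment -> list of reported support values

    def aggregate(phi, support):
        collected = answers.setdefault(phi, [])
        collected.append(support)
        if len(collected) < answers_needed:
            return None  # undecided: not enough answers yet
        return sum(collected) / len(collected) >= threshold

    return aggregate

# With threshold 0.4 and two required answers, the supports 1/3 and 1/2
# average to 5/12 >= 0.4, so the assignment is classified as significant.
agg = make_aggregator(0.4, answers_needed=2)
first = agg("phi16", 1 / 3)   # None (undecided after one answer)
second = agg("phi16", 1 / 2)  # True
```

A production aggregator would replace the plain average with the quality-aware mechanisms mentioned above (outlier filtering, trust weighting), behind the same three-valued interface.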
**Crowd member selection.** We have already noted that the black-box aggregator can be used to monitor answer quality. In addition, previous works propose different methods for evaluating crowd workers’ quality, e.g., to filter spammers [18, 27]. These methods can be used here as a preliminary step to filter the crowd members to which we pose questions. In our specific context, two additional methods can be employed to select the crowd members: first, we can check the consistency between the answers of the same user, taking advantage of the fact that the support of more specific assignments cannot be larger. In this manner, we can easily filter out spammers, while perhaps still allowing for small inconsistencies in a cooperative member’s answers. Second, the OASSIS-QL query itself may be extended to specify restrictions on the selected crowd members; see the discussion in Section 8.

5. COMPUTING THE ASSIGNMENTS

We complete the picture by explaining how assignments are computed. Since the number of assignments can be large, we adopt a lazy approach for generating them, feeding them to the vertical algorithm when needed. This optimization is important as many assignments may be pruned along the way and their computation may thus be avoided. Recall that the WHERE clause of OASSIS-QL is specified using SPARQL-like syntax. Without multiplicities, this clause can be efficiently evaluated using a SPARQL query engine. We next explain how to use assignments computed by SPARQL to address multiplicities. **Assignments with multiplicities.** Consider two assignments, \( \varphi: \$x \mapsto \text{Central_Park}, \$y \mapsto \text{Biking} \) and \( \varphi': \$x \mapsto \text{Central_Park}, \$y \mapsto \text{Baseball} \). These assignments match on every variable (in this case, \( \$x \)) but one (\( \$y \)). If \( \varphi \) and \( \varphi' \) are valid w.r.t.
the WHERE clause, then \( \varphi'': \$x \mapsto \text{Central_Park}, \$y \mapsto \{\text{Biking}, \text{Baseball}\} \) must be a valid assignment for multiplicity 2. In the other direction, if \( \varphi'' \) is valid, then by definition \( \varphi \) and \( \varphi' \) are valid, being “subsets” of \( \varphi'' \). We say that \( \varphi'' \) is a combination of \( \varphi \) and \( \varphi' \) if there exists a variable \( x \in X \) s.t. for every \( y \neq x \), \( \varphi''(y) = \varphi(y) = \varphi'(y) \), \( \varphi''(x) = \varphi(x) \cup \varphi'(x) \) and \( \varphi(x), \varphi'(x) \subset \varphi''(x) \). I.e., \( \varphi \) and \( \varphi' \) differ only in their assignment to \( x \), and \( \varphi'' \) is equivalent to their “union”. **Proposition 5.1.** Every valid assignment with multiplicity \( i > 1 \) for some variable \( x \) is a combination of two valid assignments with multiplicity \( j < i \) for \( x \). By induction, this means that we can lazily compute assignments of any multiplicity greater than 1 as combinations of multiple assignments with multiplicity 1. It remains to handle the computation of multiplicity 0: since this requires removing some of the conditions in the WHERE clause, the result may include values that do not appear in assignments with multiplicity 1. Moreover, each combination of multiplicity 0 for a different set of variables may include a different set of values. Here, we have no choice but to compute the union of all the possible combinations of variables with multiplicity 0. **Expanding the assignment set.** We have discussed how to compute all the valid assignments. Finally, we exemplify the expansion of the assignment set (line 1 of the algorithm). Note that these assignments can also be generated in a lazy manner, as needed by the algorithm.
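The combination operation behind Proposition 5.1 can be sketched directly; the dict-of-sets representation of assignments and the function name are assumptions of this sketch.

```python
# Sketch: combine two assignments that differ on exactly one variable by
# taking the union of their value sets there (the "combination" of Section 5).

def combine(phi, phi2):
    # Returns the combination, or None if the two assignments do not differ
    # on exactly one variable.
    if set(phi) != set(phi2):
        return None
    diff = [x for x in phi if phi[x] != phi2[x]]
    if len(diff) != 1:
        return None
    x = diff[0]
    combined = dict(phi)
    combined[x] = phi[x] | phi2[x]
    return combined

phi = {"$x": {"Central_Park"}, "$y": {"Biking"}}
phi2 = {"$x": {"Central_Park"}, "$y": {"Baseball"}}
combined = combine(phi, phi2)
# combined maps $y to {"Biking", "Baseball"}: a multiplicity-2 assignment.
```

Applying `combine` repeatedly to multiplicity-1 SPARQL results yields, lazily and on demand, the assignments of any multiplicity greater than 1, as Proposition 5.1 guarantees.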
**Example 5.2.** Consider lazily computing the assignments of Example 4.6: assume that we keep records of the valid assignments computed so far (at first, these are only the SPARQL results with multiplicity 1, i.e., nodes 11 and 13-17). Node 1 is computed first and generates a question to the crowd. Next (iterating in the inner loop), node 3 is computed by specializing Attraction to Outdoor. We must then verify that this node is unclassified and a predecessor of a valid assignment. Next, node 7 is computed, and so on. Node 18, which has multiplicity 2, is computed as a successor of 17 by lazily combining assignments 16 and 17.

6. IMPLEMENTATION

We have implemented the techniques described in the previous sections in OASSIS, a prototype engine for crowd-assisted evaluation of OASSIS-QL. We start this section by describing the system architecture and the user interface. Then, we describe two sets of experiments that we have conducted: first, experiments with a real crowd; and second, synthetic experiments, whose goal was to examine the effect of varying properties of the data on the algorithm’s performance.

6.1 System Architecture

The OASSIS prototype system is implemented in Python 2.7 and uses a MySQL 5.6 database. External libraries include RDFLIB for handling RDF data and NetworkX for constructing the DAG that represents the partial order over assignments (see www.rdflib.net and networkx.github.io). It uses two data repositories: an ontology in RDF format, and CrowdCache, which stores the computed assignments along with the answers collected from the crowd for each of them. When OASSIS-QL queries are executed by the system, the valid assignments (without multiplicities) are computed using the RDFLIB SPARQL engine. The assignments are then sent to a module called AssignGenerator, which is responsible for (lazily) computing additional assignments (as described in Section 5) and the assignment DAG.
A different module, QueueManager, is responsible for executing our multi-user algorithm by traversing the assignment DAG and generating a queue of questions for every crowd member who is currently active in the system. QueueManager occasionally invokes AssignGenerator with a request to compute a successor of some assignment (w.r.t. \( \leq \)), prunes assignments from the queues if they are no longer relevant (to a concrete user or globally), and updates the collected data in CrowdCache.

6.2 User Interface

We developed a web UI (in PHP 5.3), which is used both by the user who poses the query and by the crowd. For the former, the UI includes (i) an OASSIS-QL query editor, with query templates and auto-completion for language keywords and ontology elements and relations; and (ii) a UI for browsing the obtained output. As mentioned in Section 3, our vision is to adapt existing intuitive UIs for SPARQL queries to our context. Such tools include graphical query construction (e.g., [13]) and the translation of natural language questions to SPARQL (e.g., [6]). For the crowd, our UI tool also serves as a dedicated crowdsourcing platform. General-purpose crowdsourcing platforms such as Amazon Mechanical Turk did not suit our needs, which include dynamically computing the questions per crowd member based on previously collected data. In our UI, the crowd is engaged to contribute information via a social question game where they are asked query-relevant questions about their habits and experiences. Users are awarded stars (bronze, silver and gold) as they answer more questions, and can use them as virtual money, either to pose queries of their own to the system or to view suggestions computed in response to previous queries. A statistics page commends the top-20 users. Questions are retrieved iteratively from the user's queue and are then automatically translated into natural language questions using templates.
These templates are domain-specific, and can be manually created in advance. E.g., the assignment \( \varphi_{17} \) from Example 4.2 can be presented as “How often do you engage in ball games in Central Park?”, the ontology elements in bold being plugged into the template. The user answers the question by clicking an option out of “never”, “rarely”, “sometimes”, “often” and “very often”, which we interpret as the support values 0, 0.25, 0.5, 0.75 and 1, respectively. To answer specialization questions, the user can choose to “specify” any part of the presented question. When the user starts typing, auto-completion suggestions that match both the entered prefix and the query are presented. In designing the interface of OASSIS, we have made two optimizations based on feedback from preliminary user experiments. First, we allow users, via a single click, to indicate that some value occurring as part of the question is irrelevant for them, in which case we infer that every assignment that involves this value, or more specific values, has support 0. This optimization, called user-guided pruning, allows us to prune large parts of the assignment DAG for this user. Second, in specialization questions, if none of the auto-completion suggestions are relevant for the users, they can choose “none of these”, which assigns support 0 to all the proposed assignments; in this manner, we obtain the answers to many concrete questions at once, without incurring additional user effort. We have also added a “more” button, which opens a text input and allows the user to enter additional advice for the current assignment (corresponding to the MORE part of the query).

6.3 Results for Real Crowd

In this set of experiments, we have used a real crowd and a real-life ontology and vocabulary combining data from WordNet [23], YAGO [30] and Foursquare [9]. We recruited our crowd members for the experiments through social networks.
The black-box used as a decision mechanism was simple: 5 crowd answers were required to classify an assignment; if the average support exceeded the threshold, the assignment was considered significant. We used the answers from the crowd to simulate executing the same query with different support thresholds: note that the crowd answers are independent of the threshold. The user answers during the execution with support threshold 0.2 were cached, and then reused in evaluating the query for higher support thresholds. E.g., if following a crowd answer we obtained average support 0.2 for some assignment, the algorithm would continue to a more specific assignment under threshold 0.2, but in the simulation of threshold 0.4 it would stop. In the statistics below, we count for each threshold only the answers used by the algorithm out of the cached ones. We have experimented with queries from 3 application domains. All of them were chosen to reflect situations in which data is not recorded in a centralized, systematic manner, and people are the main source of knowledge. The first domain is the travel recommendation scenario, where we executed our running example query, with slight modifications to make it suitable for the target crowd (e.g., replacing NYC with Tel Aviv), and other variations thereof (e.g., asking only about sports activities and locations). The second domain is culinary preferences, where our queries retrieve popular combinations of dishes and drinks of different types (snacks, health food), which can be used, e.g., in composing new restaurant menus or by dieticians. The third domain is self-treatment, where our queries find what crowd members take in order to relieve common illness symptoms, information which can be used, e.g., by health researchers.
In general, the execution of queries from the 3 domains exhibited similar trends, but required different numbers of questions to complete, between 340 and 1416 (which we observed to be correlated with the number of MSPs, see Figures 4a-4c and Section 6.4). In total, 248 crowd members answered 20 questions on average per query to which they contributed, and our user base kept growing between query executions. When such data is collected manually, one should also add the user's processing time for the posts – reading, extracting the relevant components, aggregating the suggestions, identifying consensus, etc. We provide the user with answers that are already aggregated, starting from the first MSP. The answers that we provide are also structured, comprehensive and relevant, and we generally provide them faster. To highlight the key aspects of query execution, Figure 4 further details three example queries from the three domains, where the query in the travel domain corresponds to the running example. The selected queries exhibit two general situations: first, in the running example query, we are interested in instances (of places and restaurants), and hence some of the discovered MSPs may not be valid w.r.t. the query (e.g., an MSP that contains the element "Italian restaurant" rather than a specific restaurant); second, in the two other queries we are interested in classes, and hence all the MSPs are valid (e.g., both "Pizza" and "Italian food" are considered classes and can appear in a valid MSP). The queries also exhibit 3 extreme cases: the travel query required the most crowd questions to complete, the self-treatment query required the fewest, and the culinary query had the largest number of possible assignments in the DAG: in total, the DAGs of the three queries contained 4773, 10512 and 2307 nodes, respectively (without multiplicities).
Figures 4a-4c show different statistics about the 3 queries, for threshold values ranging from 0.2 to 0.5, all scaled to obtain similar bar heights. #MSPs and #valid represent, respectively, the total number of MSPs and the number of valid MSPs, which, as expected, generally decrease as the threshold value increases. #questions represents the total number of questions asked, including repetitions: unlike in Section 4, here we wish to measure the exact overall user effort, including asking multiple crowd members about the same assignment according to the multi-user algorithm. Observe that this number of questions decreases as there are fewer MSPs and larger parts of the DAG can be pruned. To illustrate the number of questions saved by our algorithm, baseline% compares the number of questions we ask to a baseline algorithm, which simply asks 5 questions for every valid assignment without any specific traversal order. Even when our algorithm considered an expanded set of assignments (as in Figure 4a), it asked at most 24% of the baseline algorithm's questions, and this dropped to <5% in queries where such expansion was not needed (Figures 4b-4c). In addition, it provides a better user experience. Figures 4d and 4e show, for the travel query and the self-treatment query respectively, the pace of data collection for threshold 0.2 (the second query behaves similarly to the third and hence its graph is omitted). This is illustrated by the number of questions posed as a function of the % of (i) discovered MSPs, (ii) discovered valid MSPs, which is only relevant to the travel domain query, and (iii) classified valid assignments (either as significant or not) at each point of the execution. Observe that, generally, towards the end of the execution, classifying each remaining assignment requires more crowd answers: these are typically isolated unclassified parts of the DAG, which cannot be inferred from other assignments.
Comparing the two graphs, the self-treatment query required fewer questions in general, and its first decisions on MSPs and classified assignments were made much more quickly. This is because our user base had grown between the executions of the two queries, and thus answers from more users, and at a higher rate, were obtained during the self-treatment query execution. Out of the crowd answers, 12% were for specialization questions, of which half (6% in total) got a "none of these" answer; 13% were user-guided pruning clicks, and the rest were for concrete questions. This reflects the higher effort specialization questions incur on crowd members, since users preferred to answer concrete questions, and additionally shows that the optimizations we introduced (none of these, pruning) were useful to the crowd. Multiplicities. In each of our queries, we discovered up to 25 MSPs with multiplicities. In one of the culinary queries we found, among others, that crowd members often have a steak with fries and a coke; or, more surprisingly, that when they eat muesli with yogurt for breakfast they drink apple juice. From the more input we obtained some nice tips from users, e.g., that for doing push-ups in a park (an assignment for a travel domain query) it is advisable to lean on a soft surface. We note that, in the statistics shown above, we fed the naïve algorithm only the assignments with multiplicities that our algorithm had generated, for fairness. The effect of our lazy assignment generation is demonstrated by our synthetic experiments in the next section.

6.4 Results for Synthetic Data

The next set of experiments involves synthetic data, simulating the crowd answers for varying data properties. To isolate the effect of varying different properties, we used a simulation of a single user. The results were averaged over 6 trials. The properties studied are as follows. Shape of the DAG.
We examined the effect of the assignment DAG's width and depth on the algorithm's performance. For that, we used a DAG similar to the one generated in our crowd experiments with the travel query, but varied its width between 500 and 2000 and its depth between 4 and 7 by arbitrarily pruning/replicating parts of the DAG. (In comparison, the width of the DAG in the crowd experiments is around 1350, with depth 7.) Number of (valid) MSPs. We considered different numbers of MSPs, ranging from 1% to 10% of the nodes. In practice, the % of MSPs is likely to be much lower than 10%, and was around 1.2% in our crowd experiments. Distribution of MSPs in the DAG. We used three methods of generating MSPs: (1) using a uniform random distribution over assignments (while guaranteeing that the MSPs are not comparable), (2) biased towards selecting MSPs that are nearby in the DAG, i.e., separated by at most 4 nodes, (3) biased towards selecting MSPs that are far apart in the DAG, i.e., separated by at least 6 nodes. For each such variation, we generated MSPs in the entire DAG, or only among valid assignments. Number and size of MSPs with multiplicities. We varied the number of MSPs with multiplicities between 0 and 5% (of the total nodes), and their size between 1 and 4 (which was the maximal size of an MSP with multiplicity in the crowd experiments). Ratio of specialization vs. concrete questions answered. We set the ratio of answers obtained for specialization questions in our simulation to 0%, 10% (similar to the ratio observed in the crowd experiments), 50% and 100%. These were simulated by providing the algorithm with a significant successor of the current assignment. Ratio of user-guided pruning clicks. We set the ratio of user-guided pruning clicks to 0%, 25% (similar to the ratio in the crowd experiments) and 50%. A very high ratio of user-guided pruning is not interesting, since it means that there are no significant assignments.
We compare our algorithm to two alternative approaches: - **Horizontal.** Inspired by the classic Apriori algorithm [1], this algorithm asks about an assignment \( \varphi \) only after verifying that all of its predecessors are significant. - **Naïve.** An algorithm that randomly chooses an assignment among the valid ones. The alternatives use the same inference scheme as our algorithm and avoid questions on classified assignments. We noted, in our experiments, that varying the shape of the DAG and the distribution of the MSPs in the DAG had no significant effect on the observed trends. Hence, we present here the results only for a DAG of width 500 and depth 7, with MSPs selected at random among valid assignments only. Varying the number of valid MSPs. Figure 5 shows the results for different numbers of chosen valid MSPs. For each such number, and for each of the algorithms, the figure shows the number of questions required to discover X% of the selected valid MSPs. For example, in the case of 10% valid MSPs, the vertical algorithm found 20% of these MSPs (0.2% of the total assignments) after asking 80 questions. With respect to the horizontal algorithm, our top-down vertical algorithm starts returning answers to the query much faster (e.g., it asks fewer than 35% of the questions asked by the horizontal algorithm to discover 20% of the MSPs), which is very useful in practical scenarios since answers can be returned as soon as they are identified. As a higher % of the MSPs is found, the gap becomes smaller, since the vertical algorithm saves questions on significant assignments by its traversal order but may "waste" some questions on insignificant assignments in an attempt to find a significant successor. The naïve algorithm performs well only when there is a high % of MSPs, so that discovering them by a "lucky guess" is possible (which, as mentioned above, is not the case in realistic scenarios).
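The horizontal (Apriori-style) candidate selection described in the first bullet above can be sketched as a simple frontier filter; the names and DAG encoding here are illustrative assumptions, not the paper's implementation.

```python
def horizontal_order(parents, significant, frontier):
    """Apriori-style ('horizontal') candidate selection sketch: an
    assignment may be asked about only once all of its predecessors
    in the assignment DAG are already known to be significant."""
    return [a for a in frontier
            if all(p in significant for p in parents.get(a, []))]

# Toy DAG: D is more specific than both B and C, which specialize A.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
horizontal_order(parents, {"A"}, ["B", "C", "D"])  # B and C qualify; D must wait
```

This level-by-level discipline is what delays the horizontal algorithm's first answers: deep MSPs are reached only after every ancestor has been classified, whereas the vertical algorithm dives towards them directly.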
Varying the number and size of multiplicities. Our experiments showed that the number of questions depends on the % of MSPs, and not on whether they include multiplicities. They also show that the lazy approach for generating DAG nodes with multiplicities proved very efficient: in all experiment variations, OASSIS generated less than 1% of the nodes, in comparison to an "eager" algorithm that generates all the nodes up to the same multiplicity. Varying the ratio of answer types. Figure 4f shows the number of questions required to discover X% of the valid MSPs for different ratios of specialization questions and user-guided pruning. In all cases, a high ratio of these special question types improved the algorithm's performance (although not by much). However, we note that as more and more assignments are pruned, the number of choices in specialization questions decreases, rendering concrete questions preferable in a real crowd setting (as they incur less user effort). Based on this and on user feedback, we propose allowing users to choose the question type, but offering a higher reward for specialization questions. Determining the exact reward is left for future work.

7. RELATED WORK

Data procurement is one of the most important and challenging aspects of crowdsourcing [8]. Some recent work (e.g., [7, 11, 19, 21, 25, 31, 32, 34]) suggests the construction of declarative frameworks which outsource certain tasks to the crowd, including the execution of common query operators such as filter, join and max, the task of populating a database, information extraction [14, 33], etc. However, the present paper is the first to propose a declarative framework for specifying data patterns to be mined from the crowd. The work of [16] focuses on evaluating planning queries with the crowd. While it also considers incremental selection of crowd questions, our construction of the assignment order relation renders our problem very different from theirs.
Previous work on crowd mining [3, 2], by some of the present authors, is the most closely related to ours. In [3], we consider mining association rules from the crowd; however, (i) the approach is not based on an ontology, and (ii) it is not query-based, and thus users cannot direct the mining process. [2] studies the theoretical complexity of mining frequent itemsets from the crowd but does not consider a query language or the system-related challenges studied in the present work. Mining frequent fact-sets in our setting corresponds to frequent itemset discovery, which is a fundamental building block in data mining algorithms (see, e.g., [1]). The idea of using item taxonomies in data mining, which correspond to our semantic order relation over terms, was first proposed in [28]. We extend the semantic partial order over terms to further capture facts, fact-sets and variable assignments. We also mention, in this context, work on the discovery of interesting data patterns through oracle calls [20, 10]. In particular, the traversal order of assignments that we consider for the top-down algorithm is inspired by the Dualize and Advance algorithm of [10]. Similar ideas were also employed in the context of frequent itemset mining from the crowd by [2]. However, they require further enhancements in our setting to support queries, the traversal of assignments to query variables, user sessions, etc. OASSIS-QL combines capabilities from SPARQL for RDF processing [26] with ideas from data mining query languages such as DMQL [12] (see [4] for a survey), and enhances them to obtain a query language for crowd mining. Specifically, we use SPARQL-like syntax for the part of the query that involves the ontology, and constructs inspired by DMQL for specifying the form of the patterns to be mined. The idea of evaluating such queries using the crowd is new. Finally, we mention work on mining RDF data [15, 29].
[15] considers mining of RDF concept graphs, and uses a semantic relation over facts similar to ours. However, they do not consider query-driven mining, or mining the crowd. Adapting their techniques to our setting is an intriguing direction for future work. 8. CONCLUSION AND FUTURE WORK In this paper, we introduced OASSIS, a system which allows users to specify data patterns of interest and mine them from the crowd. The model used in OASSIS captures both ontological knowledge and the individual history of crowd members from which significant patterns can be mined, and its query language OASSIS-QL allows users to formulate their information needs. We further presented an efficient query evaluation algorithm for mining semantically concise answers, which is implemented by OASSIS. Our experimental results, both with a real crowd and synthetic data, indicate the effectiveness of our solution. We have only presented in this paper how to mine fact-sets, and have greatly simplified the model in order to give a precise analysis of our algorithm's complexity. Additional features can be found in the language guide [24]. Useful future language extensions include returning the top-k answers or diversified answers; selecting the crowd members, which can be done by adding a special SPARQL-like selection on crowd members to OASSIS-QL; and more. We believe that many of the same principles developed here would still apply in the context of such extensions, e.g., lazy assignment computation, and asking crowd members questions "in context". The OASSIS engine and its UI can also be extended in many interesting ways, some of which were already mentioned throughout the paper: formulating the queries in natural language can be done by parsing a natural language sentence, constructing the where part of SPARQL as in [6], and identifying the parts that need to be mined from the crowd (subjective/personal content). 
Additional interesting possible extensions include employing an interactive refinement process for the OASSIS-QL query based on the collected answers; dynamically extending the ontology based on crowd answers; and so on. 9. ACKNOWLEDGEMENTS We are grateful to the anonymous reviewers, for their insightful comments. This work has been partially funded by the European Research Council under the FP7, ERC grant MoDaS, agreement 291071, by the NSF grant III-1302212 and by the Israel Ministry of Science. 10. REFERENCES
Defect Prediction and Dimension Reduction Methods for Achieving Better Software Quality

Dhavakumar. P, Gopalan. N. P

Abstract: To improve software quality, a defect prediction process is proposed that combines cluster classifiers with feature selection. The classifiers are first trained on the software change history and then used to predict whether a forthcoming change introduces a defect. The shortcomings of previous classifier-based defect prediction methods are inadequate performance for practical use and slow prediction times caused by the large number of machine-learned features. Feature selection is the process of choosing a subset of relevant features so that the quality of the prediction model can be improved: the prediction performance of the classification techniques is improved or maintained, while training time is considerably reduced. This work begins with an overview of the datasets used for defect prediction, and then presents a procedure for feature selection using two wrapper methods, namely a Fuzzy Neural Network (FNN) and a Kernel-based Support Vector Machine (KSVM). The features chosen by FNN and KSVM are treated as the significant features. The system discards the less significant features until optimal classification performance is reached; the total number of features used for training is considerably reduced, often to below 15% of the original. General performance metrics, namely accuracy, recall, precision and F-measure, are used to evaluate the classification systems. The results demonstrate that the proposed Hybrid Hierarchical K-Centers (HHKC) clustering achieves better software quality than conventional clustering methods.
Index Terms: Defect prediction and prevention, static code features, feature selection, Kernel-based Support Vector Machine (KSVM), Hybrid Hierarchical K-Centers (HHKC).

I. INTRODUCTION

Defect prediction in software has experienced a surge of interest from researchers over the past decade [1]. A fault in an application can lead to damaging situations at all stages of the software development process. Defect handling is a continuous process, not a one-off activity, and defect prevention starts from an understanding of faults, which denote inaccuracies or flaws in a software system; the term defect covers faults, mistakes, failures and errors. The IEEE Standard defines these terms as follows. Error: a human mistake that leads to an incorrect result. Fault: an incorrect result produced by an error. Failure: the inability of a system to meet the expected requirements [2]. Defect prevention is a process for identifying defects and their root causes, and for taking corrective and preventive measures so that they do not recur, leading to the production of quality software products [3-4]. Organizations should therefore adopt defect detection and prevention plans for long-term Return on Investment (ROI). The prediction of fault-proneness has been studied widely [5-6]. In stable development environments, metrics have been used to predict components that are likely to harbor faults. Many researchers advocate the use of product metrics, such as Halstead complexity, McCabe's cyclomatic complexity, and various code size measures, to predict fault-prone components [5-6], while others are sceptical of such basic methods [7]. The V&V guidebook [8], for instance, advocates the use of static code metrics to decide whether components are worthy of manual inspection.
The NASA software Independent Verification and Validation facility, along with several large government software contractors, will not review software components unless tools such as McCabe's predict that they are fault-prone. The use of such measures is contentious. Fenton offers an example where the same program, implemented in different programming languages and styles, yields different static measurements. Shepperd and Ince comment that "for a large class of software, cyclomatic complexity is no more than a proxy for, and in many cases outperformed by, lines of code" [7]. In prior work, Kim et al. [9] and Hata et al. [10] showed that a classifier, when trained on historical software development data, can be used to predict the existence of a defect in an individual file-level software change. The classifier is first trained on data found in historical changes and, once trained, can be used to classify a new incoming change as either buggy or clean. Imagine a future where software professionals have defect prediction capability built into their development environment [11]. Instead of waiting for a bug report, a developer receives feedback from a classifier on whether a change they committed is buggy or clean. During this process, a programmer commits source code changes to a Software Configuration Management (SCM) system and receives a defect prediction back on the change. If the change is predicted to be buggy, the developer can take various actions to find the latent defect, including writing additional unit tests, requesting a code inspection, or examining related changes made elsewhere in the system.
Because the classifier requires up-to-date training data, prediction is performed by a defect prediction service hosted on a server [11]. As the service is used by many engineers, speed is of the essence when computing predictions: fast response times allow one machine to serve many engineers, improving scalability. A defect prediction service must also offer accurate predictions. If engineers are to trust the service, it must produce few "false alarms", changes predicted to be buggy that are in fact clean [12]. If many clean changes are incorrectly predicted as buggy, developers will lose confidence in the defect prediction system. These defect predictors are built on the history of the software, but produce weaker predictions than those based on the features of the software. While some studies [13] have suggested that defect predictors built on process metrics can beat those built on product metrics, the research community still focuses heavily on product metrics, as they are much easier to obtain. A recent study [14] demonstrated that learning from software components with similar features is better than learning from complete systems, as it results in improved defect prediction. The lesson here is that the best way to predict defects in a class is to learn from classes with related behaviour. Earlier work [15] proposed the use of components to cluster software metrics and predict defects; the assumption is that the defect-proneness of a component depends on its problem domain [15]. Unfortunately, the architectural decomposition of software systems is not always explicit.
The disadvantages of previous classifier-based defect prediction methods are inadequate performance for practical use and slow prediction times due to the large number of machine-learned features. The proposed Hybrid Hierarchical K-Centers (HHKC) clustering differs from previous defect prediction work in that it clusters both data and classes. With respect to the work cited above [14] on using package-level information for defect prediction, we perform defect prediction at the class level, like most fault predictors; given this difference in granularity, a direct comparison is not feasible. With respect to the work on clustering to support defect prediction [14], we cluster mutually related classes, rather than classes with kindred properties. Because the importance of a specific feature is not known a priori, it is necessary to begin with a large feature set and progressively discard features using wrapper-based feature selection methods, namely the Fuzzy Neural Network (FNN) and the Kernel-based Support Vector Machine (KSVM). The results indicate that the proposed HHKC clustering scheme produces more accurate predictions than the baseline. The remainder of this paper is organized as follows. Section II discusses related work, while the proposed method is presented in Section III. The design of the experimental investigation is given in Section IV, where the results are also discussed. Finally, Section V concludes and outlines directions for future research.

II. RELATED WORK

Recently, Menzies et al. [16] evaluated learning for defect prediction from data local to a system or from multiple systems. Clustering of the training data is based on software metrics and not on the source program. The conclusions from this prior work strongly motivate the present method.
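The wrapper-based feature reduction introduced in Section I (start large, progressively discard features until performance stops improving) can be sketched as a backward-elimination loop. The scoring and ranking functions below are placeholders for the FNN/KSVM wrappers, which the paper does not specify at this level of detail; all names are illustrative.

```python
def backward_eliminate(features, score, rank, min_frac=0.15):
    """Backward-elimination sketch of wrapper feature selection.
    `score(fs)` returns classifier performance on the feature subset `fs`;
    `rank(fs)` orders features from least to most important.
    Stops once the subset shrinks below `min_frac` of the original set,
    mirroring the paper's reduction to below 15% of the features."""
    best, best_score = list(features), score(features)
    current = list(features)
    while len(current) > max(1, int(min_frac * len(features))):
        current = rank(current)[1:]          # drop the least important feature
        s = score(current)
        if s >= best_score:                  # keep the smaller set if no worse
            best, best_score = list(current), s
    return best

# Toy example: performance peaks once the noisy features f3/f4 are removed.
useful = {"f1", "f2"}
score = lambda fs: sum(1 for f in fs if f in useful) - 0.1 * len(fs)
rank = lambda fs: sorted(fs, key=lambda f: f in useful)  # noisy features first
backward_eliminate(["f1", "f2", "f3", "f4"], score, rank)
```

The point of the wrapper design is that the classifier itself judges each candidate subset, so the retained features are those that actually help prediction rather than those that merely look informative in isolation.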
Predictors obtained by combining small portions of different historical data sources (i.e., the clusters) are superior to either generalizations produced from global data or local models created from particular systems. The hypothesis is that such results also hold within a single system with fine-grained subsets of that system: groups of source code modules with similar properties in the same project (intra-release fault prediction). Tan et al. [17] recommended the use of both Hierarchical Clustering (HC) and a Latent Semantic Indexing (LSI) algorithm to cluster modules that are comparable at the lexical level. Differently from our method and those presented above, they build models to predict defect clusters. In addition, the identification of the clusters is not automatic, while our method requires no human intervention. Their results suggest that predictive models built on the clusters outperform models built on classes in terms of recall, precision and accuracy of the predicted defects. Mitchell and Mancoridis [18] proposed and examined a clustering system known as Bunch. To decompose a system into subsystems, Bunch uses search techniques to partition the graph representing software entities and their relations. The tool relies on several heuristics to navigate the space of all possible graph partitions, and these heuristics strongly influence the overall quality of the clustering. Similarly, a structural method based on genetic algorithms has been proposed to cluster software entities into groups [18]; as in [19], the quality of the clustering depends on the definition of the fitness function and the search algorithm. Wu et al. [20] present a comparative study of a number of clustering algorithms (e.g., an agglomerative clustering algorithm) based on the Jaccard coefficient and the complete-linkage update rule, using 0.75 and 0.90 as cut points.
To divide a software system into meaningful subsystems, all these algorithms require cut points and fitness functions to be set manually. Bittencourt and Guerrero [21] presented empirical studies that evaluate four clustering algorithms according to three previously proposed criteria: boundary of cluster distribution, authoritativeness and stability, measured across successive releases of four different systems. Their results suggest that the k-means algorithm performs best in terms of authoritativeness and boundary, and that the modularization quality algorithm produces the most stable clusters. They also point out that fully automatic clustering methods alone cannot recover component views in a reasonable way. A clustering and metrics-threshold based software defect prediction method [22] has been proposed for this challenging problem and evaluated on three data sets, collected from a Turkish manufacturer of embedded controller software. Experiments revealed that unsupervised software defect prediction can be automated and that sensible results can be produced with techniques based on metrics thresholds and clustering. The results of that study show the effectiveness of metric thresholds, and illustrate that the one-stage method is simpler than the two-stage approach, since the selection of the cluster number is performed heuristically in the clustering-based process; on the other hand, the selection of the cluster number is also done heuristically in the clustering-based model. A hierarchical clustering approach [23] has been used for finding defective components in open source software systems (JEdit); that work examines the ability of a hierarchical clustering based method to predict defective classes.
Seliya et al. [24] proposed a constraint-based semi-supervised clustering method that uses the K-means algorithm as its underlying clustering procedure. They showed that this method works better than their previous unsupervised-learning based prediction method. However, the selection of the number of clusters is still required, and their method relies on an expert's domain knowledge to iteratively label clusters as fault-prone or not; the model is therefore dependent on the skill of the expert. In this work, the Hybrid Hierarchical K-Centers (HHKC) clustering technique is used, and the HHKC model does not require selecting the number of clusters. In addition, to boost the prediction results, dimensionality reduction is performed before the defect prediction and prevention tasks.

III. PROPOSED FEATURE SELECTION AND DEFECT PREDICTION METHODOLOGY

Enhancing the quality of software products has received sustained effort from the software industry. As requirements and demand keep growing, the challenge of improving software quality grows as well. A software professional's job is to deliver excellent products within their intended cost and on their committed schedule. Software products should meet the customer's functional requirements and perform the customer's job reliably and consistently. While the software functions matter to the customer, those functions are useless unless the program works. Reliability is the attribute best suited for characterizing software quality. To make software run reliably, professionals must remove nearly all of its defects. Four functional areas, namely defect prevention, defect removal, defect tolerance, and defect prediction, are relevant to achieving dependable software systems.
Among these four areas, defect prevention and prediction have received the most attention from researchers, so this work also focuses mainly on these two steps. Defect prevention is the primary defensive technique against unreliability: a defect that is never created costs nothing to fix. Defect prevention is therefore an intrinsic goal of every software engineering method. Since prevention techniques cannot guarantee the avoidance of all software defects, defect detection and prediction methods are also needed to improve software reliability.

A. Dataset

Most prior efforts have focused on predicting the number of defects, that is, deviations from specification or expectation that may lead to failure in the operation of the software system. The number of residual defects in a software system is an important quality indicator and a key measure for software developers of their ability to deliver quality software [25]. Learning fault predictors is widely regarded as an efficient method in the area of Software Quality Assurance, since predictors can be used to set testing priorities and avoid exhaustive testing, generally the most expensive part of the software development life cycle [26]. Many fault datasets have been collected from diverse projects to investigate a variety of statistical and machine learning methods. Menzies [5] initiated baseline experiments using the NASA open data repository [27], leading researchers to use that repository in order to produce refutable, verifiable, repeatable, and improvable predictive models for the software industry [28].
**Defect Prediction and Dimension Reduction Methods for Achieving Better Software Quality**

A.1 Static Code Attributes

Static code attributes are simple and useful indicators of modules that require re-examination and inspection. Verification and validation guides such as [29] recommend using static code complexity attributes to decide which components are worthy of manual inspection. It has been asserted at NASA IV&V facilities that tools such as McCabe can predict which modules may be defect-prone [30]. This clearly reveals the significance of static metrics. The aim is to provide the research community with unbiased system data. Defect predictors are learned from datasets containing static code attributes, whose class label is "defective" with values true or false. Depending on the language, rows describe information from a module, function, method, procedure, or file. If D denotes all the data in Table 1, and Di denotes one particular dataset with Di ≠ D, then two sorts of experiments can be carried out.
<table> <thead> <tr> <th>Project</th> <th>Source</th> <th># modules</th> <th># attributes</th> <th>% faulty</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>pc1</td> <td>NASA</td> <td>1109</td> <td>21</td> <td>6.94</td> <td>Flight software for an earth-orbiting satellite</td> </tr> <tr> <td>kc1</td> <td>NASA</td> <td>845</td> <td>21</td> <td>15.45</td> <td>Storage management for ground data</td> </tr> <tr> <td>kc2</td> <td>NASA</td> <td>522</td> <td>21</td> <td>20.49</td> <td>Storage management for ground data</td> </tr> <tr> <td>cm1</td> <td>NASA</td> <td>498</td> <td>21</td> <td>9.83</td> <td>Spacecraft instrumentation</td> </tr> <tr> <td>kc3</td> <td>NASA</td> <td>458</td> <td>39</td> <td>9.38</td> <td>Storage management for ground data</td> </tr> <tr> <td>ar4</td> <td>Turkish white goods producer</td> <td>107</td> <td>30</td> <td>18.69</td> <td>Refrigerator</td> </tr> <tr> <td>ar3</td> <td>Turkish white goods producer</td> <td>63</td> <td>30</td> <td>12.70</td> <td>Dishwasher</td> </tr> <tr> <td>mc2</td> <td>NASA</td> <td>61</td> <td>39</td> <td>32.29</td> <td>Video guidance system</td> </tr> <tr> <td>ar5</td> <td>Turkish white goods producer</td> <td>36</td> <td>30</td> <td>22.22</td> <td>Washing machine</td> </tr> </tbody> </table> Table 1: Datasets sorted by number of modules, from http://promisedata.org/data. Some preprocessing is needed to make the data ready for the processing stage. These preprocessing steps can be simple dataset alterations, such as removing constant attributes and replacing not-a-number values, or can be major, such as changing the data representation space. Menzies [30] made small changes to the original data, applying a logarithmic filter to all numeric values in the hope of improving predictor performance.
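As a sketch, the logarithmic filter applied by Menzies [30] could look as follows in Python; the `epsilon` clamp for zero values is an assumption, since the exact handling of zeros is not specified here.

```python
import math

def log_filter(rows, epsilon=1e-6):
    """Apply a logarithmic filter to all numeric metric values,
    as Menzies [30] does to improve predictor performance.
    Zeros are clamped to `epsilon` (an assumption) so log() is defined."""
    return [[math.log(max(v, epsilon)) for v in row] for row in rows]

# Example: two modules with three static code metrics each.
metrics = [[10.0, 2.0, 0.0], [100.0, 5.0, 1.0]]
filtered = log_filter(metrics)
```

Because the static code metrics are heavily skewed counts, the log transform compresses their range, which often helps distance- and margin-based learners.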
Gray [31] applied general data cleaning, including removing repeated and conflicting instances, balancing the data, removing constant attributes, replacing missing values, normalization, and randomizing sample order. A certain number of defects can be prevented during the fault elimination process, for example by improving the development team through training, or by using formal specifications and formal verification. Defects can also be prevented by the use of tools, methods, technologies, and standards.

B. Feature selection methods

While enforcing quality principles in software development, practitioners often overlook the high-dimensionality problem during defect prevention and prediction. Feature selection is the process of choosing a subset of relevant features so that the quality of prediction models can be maintained or improved; the prediction performance of the clustering process is thus enhanced or preserved while learning time is considerably reduced. To the static code attributes of the NASA IV&V dataset described above, a few additional attributes are added: Change Log (CL), File name (F), and New Revision Source Code (RSC). This work begins by presenting an overview of the change classification method for error prediction, and introduces a novel procedure for feature selection using wrapper schemes, namely a Fuzzy Neural Network (FNN) and a Kernel-based Support Vector Machine (KSVM). Standard measures for assessing feature selection, namely precision, accuracy, F-measure, recall, and ROC AUC, are explained in Section 4. **Kernel-based Support Vector Machine (KSVM):** The Kernel-based Support Vector Machine (KSVM) approach employs a fuzzy sigmoid kernel as its kernel function for selecting static code attributes such as LOC, Halstead (H), Change Log (CL), and File name (F).
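A wrapper-based feature selection scheme of the kind described above can be sketched as follows; the toy `evaluate` function (a weighted score) merely stands in for training the FNN or KSVM and measuring its prediction quality, and the attribute names are illustrative.

```python
from itertools import combinations

def wrapper_select(features, evaluate, k):
    """Exhaustive wrapper feature selection: score every subset of size k
    with `evaluate` (train + validate a model) and keep the best subset."""
    best_subset, best_score = None, float("-inf")
    for subset in combinations(features, k):
        score = evaluate(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score

# Toy evaluator standing in for "train FNN/KSVM and measure accuracy":
weights = {"loc": 0.4, "halstead": 0.35, "change_log": 0.2, "file_name": 0.05}
evaluate = lambda subset: sum(weights[f] for f in subset)

subset, score = wrapper_select(list(weights), evaluate, k=2)
```

In practice the exhaustive search is replaced by greedy forward or backward selection once the feature set grows, but the wrapper principle (score subsets by model performance) is the same.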
The KSVM method identifies the significant static code attributes by maximizing the margin between static code attribute classes. For static code feature selection, buggy and clean recall, precision, F-measure, accuracy, and ROC AUC are computed using the clustering procedure, and the outcome is stored in a tuple list. If the selected static code attributes achieve the maximum fitness values on these factors, those attributes are considered significant; the remaining attributes are considered insignificant or irrelevant. To summarize the feature selection procedure for the NASA IV&V dataset with static code attributes: the fitness value between static code attributes is computed using the inner product as the metric. Let $(s_i, y_i),\ i = 1, \ldots, n$ denote the NASA IV&V dataset of labelled static code attribute vectors, with labels $y_i \in \{\pm 1\}$ for feature selection under the wrapper-based scheme, where $C$ is the number of classes, $c = 1, \ldots, C$. Let $\phi(s_i)$ be a non-linear mapping which, in accordance with Cover's theorem [32], guarantees a high fault prediction accuracy rate for linearly separable static code attributes from the NASA IV&V datasets in a generally higher-dimensional feature space $S$:

$$ y_i \left( \phi^T(s_i) W + b \right) \geq 1 - \xi_i, \quad \xi_i \geq 0, \quad \forall i = 1, \ldots, n \qquad (2) $$

where $W$ and $b$ are the weight vector and bias for the static code attributes, learned from the NASA IV&V dataset.
KSVM training on the static code attributes is governed by a regularisation factor $r$, chosen automatically or predefined by the user; the slack of the prediction on static code attribute vectors is represented by $\xi_i$. With the aim of raising the fault prediction accuracy, a kernel function $K$ is employed,

$$ K(s_i, s_j) = \phi(s_i)^T \phi(s_j) \qquad (3) $$

When this kernel alone does not improve the fault prediction accuracy, the kernel is instead evaluated as a fuzzy sigmoid function [33], yielding the decision function

$$ f(s) = \operatorname{sgn}\left( \sum_{i=1}^{n} y_i \alpha_i K(s_i, s) + b \right) \qquad (4) $$

where the bias $b$ of the fuzzy kernel can be readily computed from the $\alpha_i$, each of which is either $0$ or $C$. This work follows the hyperbolic tangent (sigmoid) kernel [34]:

$$ K(s_i, s_j) = \tanh\!\left( a \, \langle s_i, s_j \rangle - y \right) \qquad (5) $$

where $a$ controls the steepness of the sigmoid region. In fuzzy logic theory, the sigmoid kernel can be interpreted as a combination of fuzzy membership functions. Many membership functions exist; three triangular functions are used here because of their simplicity. Since the resulting fuzzy sigmoid kernel is piecewise, it can be rewritten as a function of $a$ and $y$ as follows:

$$ K(s_i, s_j) = \begin{cases} -1 & \langle s_i, s_j \rangle \leq y - \frac{1}{a} \\ a \left( \langle s_i, s_j \rangle - y \right) & y - \frac{1}{a} < \langle s_i, s_j \rangle < y + \frac{1}{a} \\ +1 & \langle s_i, s_j \rangle \geq y + \frac{1}{a} \end{cases} \qquad (6) $$

This is the final form of the piecewise fuzzy sigmoid ($fuzzy\_tanh$) kernel.
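The piecewise fuzzy sigmoid kernel of Eq. (6) can be sketched as below; the linear middle segment with slope `a` is a reconstruction chosen so that the kernel is continuous at the saturation boundaries, and the default parameter values are illustrative.

```python
def fuzzy_sigmoid_kernel(dot, a=1.0, y=0.0):
    """Piecewise fuzzy sigmoid (fuzzy tanh) kernel of Eq. (6):
    saturates at -1 / +1 outside [y - 1/a, y + 1/a].
    `dot` is the inner product <s_i, s_j> of two attribute vectors."""
    if dot <= y - 1.0 / a:
        return -1.0
    if dot >= y + 1.0 / a:
        return 1.0
    return a * (dot - y)  # linear middle segment (reconstruction)
```

The saturation is what makes the kernel fast to evaluate: most pairs fall in the flat regions and need no transcendental function call.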
The main benefits of this function are (1) it runs faster, because the final prediction outputs are expressed as a series of saturated values (Eq. (6)), and (2) it allows choosing various degrees of non-linearity by selecting the number and prominence of the membership functions.

**Fuzzy Neural Network (FNN):** In addition, a novel Fuzzy Neural Network (FNN) learning scheme is proposed to select static code attributes from the NASA IV&V dataset. The proposed FNN system selects static code attributes across the different classes. The diagram of the proposed FNN model is shown in Figure 1. The model comprises three main steps. In the first step, the method takes static code attributes as input and fuzzifies their values using Membership Functions (MF). The MF produces a membership matrix whose rows and columns correspond to the static code attributes and their classes, respectively. Thus, the first step of the proposed FNN learning system maps the static code attributes of the NASA IV&V dataset to classes through the MF, which helps raise prediction accuracy. The benefit of using a π-type MF [35-36] is that it has a parameter, known as the fuzzifier $(\mu)$, which can be adjusted easily according to the requirements of the problem. This yields more efficient outcomes for fault prediction analysis, and generalization ability can be controlled by choosing a higher fuzzifier $(\mu)$ value.

![Diagram](image)

**Figure 1.
Fuzzy Neural Network (FNN)**

In the second step, the membership matrix is converted into a static code feature vector by flattening its columns (or rows). This flattened vector becomes the input to the FNN, so the number of input nodes equals the product of the number of static code attributes and the number of classes. The final step of the proposed FNN uses a MAX operation to defuzzify the static code attribute values output by the network. A pattern is assigned to the fault or non-fault class ($f_1$ or $f_2$) with the highest class membership value, giving maximum prediction accuracy.

**Fuzzification:** The membership function fuzzifies the static code attributes $sc_{ij}(m)$ of the feature matrix. The resulting membership matrix $F(sc_{ij}(m))$ expresses the degree to which the different static code attributes belong to the different classes $f_1$ and $f_2$, where $sc_{ij}$ is the $j$th static code attribute of pattern $i$, and $c \in \{1, 2, \ldots, C\}$. The general pattern of static code attributes is thus represented as

\[ sc_i = [sc_{i1}, sc_{i2}, \ldots, sc_{ij}, \ldots, sc_{iD}]^T \]

A π-type MF is used to model a class [36]. The function is defined as

\[ \pi(sc_{ij}; a, r, b) = \begin{cases} 0, & sc_{ij} \leq a \\ 2^{m-1} \left[ \dfrac{sc_{ij} - a}{r - a} \right]^m, & a < sc_{ij} \leq p \\ 1 - 2^{m-1} \left[ \dfrac{r - sc_{ij}}{r - a} \right]^m, & p < sc_{ij} \leq r \\ 1 - 2^{m-1} \left[ \dfrac{sc_{ij} - r}{b - r} \right]^m, & r < sc_{ij} \leq q \\ 2^{m-1} \left[ \dfrac{b - sc_{ij}}{b - r} \right]^m, & q < sc_{ij} < b \\ 0, & sc_{ij} \geq b \end{cases} \]

Note that $m$ is the fuzzifier, set to 2 here. The membership function is centred at $r$, computed as the mean of the training dataset.
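The π-type membership function above can be sketched as follows; taking the crossover points p and q as the midpoints of [a, r] and [r, b] is an assumption made so the function is continuous with value 0.5 at the crossovers, and the fuzzifier m defaults to 2 as stated in the text.

```python
def pi_membership(x, a, r, b, m=2):
    """Pi-type membership function with fuzzifier m = 2:
    0 outside [a, b], rising to 1 at the centre r, then falling back.
    Crossover points p, q are taken as the midpoints of [a, r] and
    [r, b] (an assumption; they give membership 0.5 there)."""
    p, q = (a + r) / 2.0, (r + b) / 2.0
    if x <= a or x >= b:
        return 0.0
    if x <= p:
        return 2 ** (m - 1) * ((x - a) / (r - a)) ** m
    if x <= r:
        return 1.0 - 2 ** (m - 1) * ((r - x) / (r - a)) ** m
    if x <= q:
        return 1.0 - 2 ** (m - 1) * ((x - r) / (b - r)) ** m
    return 2 ** (m - 1) * ((b - x) / (b - r)) ** m
```

Fuzzifying one attribute with one such function per class fills one column of the membership matrix that feeds the FNN.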
It is defined as \(r = \text{mean}(scf)\), i.e., the mean value of the static code attribute over the training data. The crossover points \(p\) and \(q\) are estimated as \(p = \text{mean}(scf) - \frac{\max(scf) - \min(scf)}{2}\) and \(q = \text{mean}(scf) + \frac{\max(scf) - \min(scf)}{2}\), where \(\max\) and \(\min\) are the maximum and minimum values respectively. For the static code attribute values, the membership matrix after fuzzification is

\[ F(sc_{ij}(m)) = \begin{bmatrix} f_{11}(sc_{i1}(m)) & f_{12}(sc_{i2}(m)) & \cdots & f_{1D}(sc_{iD}(m)) \\ f_{21}(sc_{i1}(m)) & f_{22}(sc_{i2}(m)) & \cdots & f_{2D}(sc_{iD}(m)) \\ \vdots & \vdots & \ddots & \vdots \\ f_{C1}(sc_{i1}(m)) & f_{C2}(sc_{i2}(m)) & \cdots & f_{CD}(sc_{iD}(m)) \end{bmatrix} \]

where \(f_{cd}(sc_{iD}(m))\) is the membership of the \(d\)th feature in the \(c\)th class. The proposed neuro-fuzzy (NF) classification scheme is implemented using the popular feed-forward multi-layer perceptron classifier [36] with three layers: input, hidden, and output. The number of nodes in the input layer equals the number of elements in the membership matrix, and the number of nodes in the output layer equals the number of classes (fault and non-fault). The number of hidden nodes is chosen as the square root of the product of the numbers of input and output nodes. The final step of the NF scheme is a crisp classification, computed by a maximum (MAX) operation that defuzzifies the output of the neural network.

**C. Defect prevention**

Defect prevention is a significant activity in many software projects. In most software organizations, the project team focuses on defect detection and rework.
Thus, defect prevention frequently becomes a neglected activity. It is therefore sensible to establish processes that prevent defects from being introduced into the product right from the early phases of the project. While the cost of such methods is negligible, the derived benefit, through overall cost reduction, is considerably higher than the cost of fixing defects at a later phase. Investigating defects at early stages thus reduces the cost, resources, and time required. Knowledge of the defect injection process and its evolution enables defect prevention; once this knowledge is applied, quality is enhanced and overall productivity improves. In this work, defect prevention is performed using the Orthogonal Defect Classification (ODC) scheme [37]. ODC is a well-established method for characterizing faults in which defects are clustered into categories rather than considered individually. The ODC method classifies every defect along orthogonal (mutually exclusive) attributes, some technical and some managerial. These attributes provide the information needed to sift through the enormous amount of data and surface patterns on which root-cause analysis can be performed. Combined with good action planning and tracking, this can achieve a high degree of defect reduction and cross-learning. **Modified ODC Defect Attributes:** The challenge for any software defect measurement method is to identify a minimal set of defect attributes, so as to keep the classification simple and the overhead added to the development process minimal, while fully mapping the behaviour of the development process. ODC uses two attributes, Defect Trigger and Defect Type [37], to provide measurement instruments for the cause-effect relationship of software faults.
The Defect Type characterizes the defect based on the nature of the change required to fix it, providing a measure of the progress of the product through the development process. The Defect Trigger characterizes the fault based on the condition that causes it to surface and result in a failure, providing a measure of the verification process. **Defect Type:** The "defect type" attribute describes the actual fix that was made. For instance, if the fix involves interaction between two classes or methods, it is an interface defect. ODC uses eight categories for the defect type, providing a structure for identifying defect types and the sources of errors in a software development effort through the feedback obtained by examining the defects injected into its systems. The defect type relies only on the significant attributes, which reduces processing time.

- **Function:** Affects significant capability, product interfaces, end-user interfaces, interfaces with hardware architecture, or global data structures.
- **Logic:** Affects the functionality of a code component and can be fixed by re-implementing an algorithm or local data structure, without requiring a high-level design change.
- **Interface:** Affects the interaction of components through call functions, macros, and/or parameters.
- **Checking:** Affects program logic that properly validates values and data before they are stored or used in computation.
- **Assignment:** Requires changing a few lines of code, such as the initialization of a control block or data structure.
- **Timing/serialization:** Affects the proper management of shared and real-time resources.
- **Build/merge/package:** Affects the product version or configuration; requires formal changes to reconfigure or rebuild the product.
**Defect Trigger:** The "defect trigger" attribute represents the condition that causes the fault to surface. ODC uses three stages of fault triggers: Review and Inspection triggers, Unit and Function Test triggers, and System and Field Test triggers. For faults found after the software is released to the customer, the trigger is selected from the set of triggers for the testing activity that should most plausibly have detected the fault before release. The trigger for a fault is assigned based on the testing activity that causes the defect to manifest itself as a failure. For instance, if during a review a fault is discovered while examining the design for compatibility with previous versions of the system, the fault trigger would be Design Compatibility. The fault trigger provides a measure of the thoroughness of the verification process. The fault trigger should not be confused with the symptom of the defect, which is the observable consequence of a fault that results in a failure. For instance, if during a review of the new software's compatibility with previous versions a reviewer discovers an assignment error that would result in a particular icon not being displayed correctly, the symptom is the icon not being shown, the trigger is backward compatibility, and the fault type is assignment. Finally, ODC is used to examine the defect data from the PC1 and PC2 datasets. This prevention step only analyses fault information; the precise fault/non-fault outcome is produced by the prediction stage, so in the next stage of this work a software prediction method based on a clustering algorithm is proposed.

**D. Fault prediction using Hybrid Hierarchical K-Centers (HHKC) Clustering**

Clustering algorithms based on structural data in source code have also been used successfully in the...
analysis of software architecture [20-21]. For instance, Wu et al. [20] presented a comparative study of a number of clustering algorithms. To divide a software system into meaningful subsystems, all of those procedures require manual configuration (e.g., the specification of fitness functions and cut points). Similarly, Bittencourt and Guerrero [21] conducted an empirical study evaluating four widely known clustering algorithms on a number of software systems implemented in C/C++ and Java: k-means clustering, modularization quality clustering, edge-betweenness clustering, and design structure matrix clustering. The clustering method adopted here is different in that it does not require any configuration to divide classes into clusters. Class dependencies are extracted statically from the source code by considering static references in the modules. The software system is represented as a tree, whose parent nodes are modules/classes and whose edges are the dependencies among them. Owing to this tree representation, classes/modules are grouped into clusters by the Hybrid Hierarchical K-Centers (HHKC) clustering algorithm. As a popular clustering family, hierarchical k-means is generally well suited to software architecture analysis, but the major problem with hierarchical clustering is that source files merged in an early phase of the algorithm can never be separated later. To resolve this problem, the proposed Hybrid Hierarchical K-Centers (HHKC) algorithm [38] combines the bottom-up Unweighted Pair Group Method with Arithmetic Mean (UPGMA) agglomerative hierarchical clustering with the top-down K-centers approach.
The group average (UPGMA) step computes the average pair-wise similarity of the source code attribute vectors $A = \{scf_1, \ldots, scf_{n_A}\}$ and $B = \{scf_1, \ldots, scf_{n_B}\}$ of each pair of clusters:

\[ FastSim(A, B) = \frac{1}{n_A \cdot n_B} \sum_{a \in A,\; b \in B} FastSim(a, b) \]

where $FastSim(a, b)$ is the similarity between two source code attribute vectors and $n_A$, $n_B$ are the numbers of vectors in clusters A and B. After UPGMA finds K clusters, their K centroids are examined: if two centroids end up in the same cluster, all of their source files can fall in either the fault or the non-fault group. K-centers clustering is therefore used instead. Unlike k-means, the cluster centre in K-centers is simply updated as the data point with the greatest similarity to the other data points in the same cluster. Here the K centroids are chosen based on the smallest distance between two source files under the Euclidean distance measure; the cluster centre is updated as the data point with the smallest Euclidean distance to the other source code attribute vectors in the same cluster. The Euclidean distance is computed as

\[ d(scf_i, scf_j) = \sqrt{ \sum_{k=1}^{D} \left( scf_{jk} - scf_{ik} \right)^2 } \]

Grouping source code attribute vectors into clusters A and B with high within-cluster $FastSim$ boosts prediction performance: the algorithm maximizes the number of dependencies from the boundary of each cluster to the nodes inside the cluster, while minimizing the number of dependencies (and keeping $FastSim(A,B)$ low) from the cluster to nodes outside it. The rationale is to find groups of strongly related classes that are likely to implement a set of related features.
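A minimal sketch of the UPGMA group-average similarity is shown below; cosine similarity is used as a stand-in for FastSim between two attribute vectors, since the exact FastSim definition is not given here.

```python
def fast_sim(a, b):
    """Cosine similarity standing in for FastSim between two
    source code attribute vectors (an assumption)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def upgma_sim(cluster_a, cluster_b):
    """Average pair-wise similarity between two clusters
    (UPGMA / group-average linkage), as in FastSim(A, B) above."""
    total = sum(fast_sim(a, b) for a in cluster_a for b in cluster_b)
    return total / (len(cluster_a) * len(cluster_b))

# Two tight clusters pointing in roughly orthogonal directions:
A = [[1.0, 0.0], [1.0, 0.1]]
B = [[0.0, 1.0], [0.1, 1.0]]
sim_between = upgma_sim(A, B)  # low between-cluster similarity
sim_within = upgma_sim(A, A)   # high within-cluster similarity
```

UPGMA merges the pair of clusters with the highest average similarity at each step; the K-centers pass then refines the centres of the resulting clusters.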
From these clustering results, an empirical evaluation was conducted to assess whether the HHKC clustering method can be effectively used for defect prediction.

**IV. EXPERIMENTATION RESULTS**

This section conducts an empirical evaluation of whether the HHKC clustering technique can be effectively used for fault prediction, following the design principles of experimental software engineering. To conduct the evaluation, a prototype of the proposed scheme, based on the instantiation of the HHKC clustering technique, was implemented as an Eclipse plug-in. For replication purposes, a release of this plug-in and the experimental data are available on the web. The objective is to determine whether fault prediction for modules improves when the prediction model is built on clusters. Two sorts of predictors are built: (1) using data from a cluster to train the model and to predict faults for the classes in that cluster, and (2) building the model using data from all the modules in the system. The defect information was downloaded from the defect datasets (available at promisedata.googlecode.com) of the PROMISE repository [39]. For every module, the number of defects is reported together with its software metrics. We now describe the performance metrics used to assess prediction accuracy: Precision, Accuracy, F-Measure, Recall, and the Area Under the Receiver Operating Characteristic Curve (ROC AUC).
There are four possible outcomes when applying a classifier to a single change:

- Classifying a faulty change as faulty, \( f \rightarrow f \) (True Positive, TP)
- Classifying a faulty change as non-faulty, \( f \rightarrow nf \) (False Negative, FN)
- Classifying a non-faulty change as non-faulty, \( nf \rightarrow nf \) (True Negative, TN)
- Classifying a non-faulty change as faulty, \( nf \rightarrow f \) (False Positive, FP)

Over a good test set one can count the number of faulty changes correctly classified as faulty (\( n_{f \rightarrow f} \)), faulty changes wrongly classified as non-faulty (\( n_{f \rightarrow nf} \)), non-faulty changes correctly classified as non-faulty (\( n_{nf \rightarrow nf} \)), and non-faulty changes wrongly classified as faulty (\( n_{nf \rightarrow f} \)). Accuracy is the number of correctly classified changes over the total number of changes. As there are typically many more non-faulty changes than faulty ones, this measure can yield a high value even when clean changes are predicted better than faulty ones; it is therefore often less relevant than fault precision and recall.

\[ \text{Accuracy} = \frac{n_{f \rightarrow f} + n_{nf \rightarrow nf}}{n_{f \rightarrow f} + n_{f \rightarrow nf} + n_{nf \rightarrow f} + n_{nf \rightarrow nf}} \]

Precision is the number of correct fault classifications over the total number of classifications that produced a fault outcome:

\[ \text{Precision } (P) = \frac{n_{f \rightarrow f}}{n_{f \rightarrow f} + n_{nf \rightarrow f}} \]

Recall, also called the True Positive Rate (TPR), is the number of correct fault classifications over the total number of changes that were actually faulty:

\[ \text{Recall } (R) = \frac{n_{f \rightarrow f}}{n_{f \rightarrow f} + n_{f \rightarrow nf}} \]

F-measure is a combined measure of fault-change precision and recall; more precisely, it is the harmonic mean of precision and recall.
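The four counts above map directly onto the performance metrics; a minimal sketch with hypothetical counts follows.

```python
def classification_metrics(tp, fn, tn, fp):
    """Accuracy, precision, recall, and F-measure from the four
    confusion-matrix counts (TP, FN, TN, FP) defined above."""
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# Hypothetical counts for illustration:
acc, p, r, f = classification_metrics(tp=40, fn=10, tn=45, fp=5)
```

Note how accuracy mixes both classes while precision and recall focus on the faulty class, which is why the text treats them as the more relevant measures for imbalanced defect data.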
Since precision can often be improved at the expense of recall, the F-measure is a good measure of the overall precision/recall performance of a classifier, as it integrates both values.

\[ F\text{-measure} = \frac{2 \times P \times R}{P + R} \]

To assess the quality of the model's predictions, a k-fold cross-validation was performed. k-fold cross-validation is widely applied to assess how the results of a statistical analysis generalize to an independent data set. In particular, when the objective is prediction, k-fold cross-validation is used to estimate how accurately a predictive model will perform in practice. The validation proceeds in k rounds. Each round involves partitioning the original dataset into training and test sets: the training set is used to build the fault prediction model, while the test set is used to validate it. The results are averaged over the rounds. In the data analysis we use leave-one-out cross-validation (i.e., \( k = n \) where \( n \) is the size of the dataset), in which the original dataset is split into n different pairs of training and test sets, each test set containing one observation.

**Figure 2. Accuracy comparative graphs of projects**

Figure 2 shows the accuracy comparison graph for the different projects, evaluated with three approaches: k-means clustering, Hierarchical Clustering (HC), and the proposed Hybrid Hierarchical K-Centers (HHKC) clustering. From the graph it is clear that the proposed HHKC raises the fault prediction rate for all projects compared to the other clustering-based prediction approaches. HHKC attains the highest accuracy of 94.83% for the ar5 project, which is 1.28% and 2.57% higher than the HC and k-means clustering approaches respectively.
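The leave-one-out cross-validation described above can be sketched as follows; the 1-nearest-neighbour `nn_predict` is a toy stand-in for the actual fault prediction model, and the data points are illustrative.

```python
def leave_one_out(data, labels, train_and_predict):
    """Leave-one-out cross-validation (k = n): each round holds out one
    observation as the test set and trains on the remaining n - 1."""
    correct = 0
    for i in range(len(data)):
        train_x = data[:i] + data[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        if train_and_predict(train_x, train_y, data[i]) == labels[i]:
            correct += 1
    return correct / len(data)  # accuracy averaged over the n rounds

# Toy 1-nearest-neighbour model standing in for the fault predictor:
def nn_predict(train_x, train_y, x):
    dists = [abs(t - x) for t in train_x]
    return train_y[dists.index(min(dists))]

acc = leave_one_out([1.0, 1.1, 5.0, 5.2], ["nf", "nf", "f", "f"], nn_predict)
```

Leave-one-out is the extreme of k-fold: it uses nearly all data for training in every round, at the cost of n model fits.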
The accuracy results of the proposed HHKC are higher for all projects: because the feature set is reduced by the feature selection schemes, the model becomes more stable, resource cost is reduced, and software quality is improved for all projects. **Figure 3. Precision comparative graphs of projects** Figure 3 shows the precision comparison graph for the different clustering approaches: K-means clustering, HC, and the proposed HHKC clustering technique. It is clear from the graph that the proposed HHKC clustering technique improves the precision value for all projects compared to the remaining clustering-based fault prediction approaches, since reducing the feature set through feature selection increases the fault prediction rate. HHKC attains a maximum precision of 94.36% on the ar5 project, which is 1.21% and 2.18% higher than the HC and K-means clustering approaches respectively. **Figure 4. Recall comparative graphs of projects** Figure 4 shows the recall comparison graph for the different clustering methods: K-means clustering, HC, and the proposed HHKC clustering approach. The graph shows that the proposed HHKC clustering approach improves the recall rate for all projects compared to the other clustering-based fault prediction techniques, again because feature selection reduces the feature set and boosts the fault prediction rate. HHKC attains a maximum recall of 93.66% on the ar5 project, which is 1.28% and 2.38% higher than the K-means clustering and HC methods respectively. **Figure 5. F-measure comparative graphs of projects** Figure 5 shows the F-measure comparison graph for the different clustering methods: HC, K-means clustering, and the proposed HHKC clustering approach. The graph shows that the proposed HHKC clustering approach improves the F-measure value for all projects compared to the remaining clustering-based fault prediction approaches. F-measure is a combined measure of precision and recall; more exactly, it is their harmonic mean. HHKC attains a maximum F-measure of 94.2413% on the ar5 project, which is 1.28% and 2.4739% higher than the K-means clustering and HC methods respectively. V. CONCLUSION AND FUTURE WORK Implementing a defect prevention scheme not only reflects a higher level of process maturity but is also a substantial investment. Finding faults early in the development life cycle helps prevent their propagation from requirements to design and from design to code. The proposed Hybrid Hierarchical K-Centres (HHKC) clustering differs from previous defect prediction work in that it clusters both modules and data. Because the worth of a specific feature is not known a priori, it is necessary to start with a large feature set and gradually reduce it using wrapper-based feature selection schemes such as the Fuzzy Neural Network (FNN) and the Kernel-based Support Vector Machine (KSVM). Static code attributes are simple to collect and are good indicators of modules that need to be reviewed and inspected, so these attributes are reduced with the feature selection process. The HHKC clustering is based on structural relationships amongst classes, i.e., static references.
Our experiments suggest that early life-cycle measures can play an important role in project management, either by indicating the need for increased quality monitoring during development or by using the models to allocate verification and validation effort. The clustering results provide an empirical assessment of whether the HHKC clustering method can be used effectively for defect prediction. Future work will extend the Hybrid Hierarchical K-Centres (HHKC) clustering method by taking lexical data in the source code into account. An interesting open problem for future work is to determine the failure rate, profile these fault densities, and compute software reliability, safety, and availability from these values. AUTHORS PROFILE P. Dhavakumar received the B.Tech degree in CSE from Anjalai Ammal Mahalingam Engineering College, Kovilvenni, affiliated to Bharathidasan University, Trichy, Tamil Nadu, in 2004, and the M.E. degree in Software Engineering from the College of Engineering Guindy, Anna University, Chennai, in 2010. He is currently working toward the Ph.D. degree at the Department of Computer Applications, National Institute of Technology, Trichy, India. His research interests include software engineering, networking and IoT. Dr. N. P. Gopalan received the M.E. degree in Computer Science and Engineering from the National Institute of Technology, Trichy, Tamil Nadu, and the Ph.D. degree from the Indian Institute of Science, Bengaluru, India. He is currently working as a Professor at the Department of Computer Applications, National Institute of Technology, Trichy, India. His research interests include data mining, web technology, distributed computing, and theoretical computer science.
Active Device Drivers Sidney Amani, Peter Chubb, Alastair F. Donaldson, Alexander Legg, Leonid Ryzhyk, Yanjin Zhu NICTA, University of New South Wales, Imperial College London sidney.amani@nicta.com.au Abstract We develop a practical solution to the problem of automatic verification of the interface between device drivers and the operating system. Our solution relies on a combination of improved driver architecture and verification tools. Unlike previous proposals for verification-friendly drivers, our driver development and verification methodology supports drivers written in C and can be implemented in any existing OS. Our Linux-based evaluation shows that this methodology amplifies the power of existing model checking tools in detecting driver bugs, making it possible to verify properties that are beyond the reach of traditional techniques. 1 Introduction Faulty device drivers are a major source of operating system (OS) failures [12, 6]. Recent studies of Windows and Linux drivers show that over a third of driver bugs result from the complex interface between driver and OS [19, 2]. The OS defines complex rules on the ordering and arguments of driver invocations, rules that often are neither well documented nor stable across OS releases. Worse, the OS can invoke driver functions from multiple concurrent threads, and so driver developers must implement complex synchronisation logic to avoid races and deadlocks. Automatic verification has proved useful in detecting OS interface violations in device drivers. Research on automatic driver verification has followed two main avenues. The first avenue, represented by tools like SLAM [2], Terminator [8], SATABS [7], and Blast [13], focuses on verifying existing Windows and Linux drivers. Despite significant effort invested in improving these tools, they remain limited in the complexity of properties that can be efficiently verified without generating a large number of false positives.
The second line of research, represented by the Singularity [10] and RMoX [3] OSs, focuses on making drivers more amenable to automatic verification via improved software architecture. In this architecture each driver has its own thread and communicates with the OS using message passing, which makes the driver control flow and its interactions with the OS easier to understand and analyse. We refer to such drivers as active drivers, in contrast to conventional, passive, drivers that are structured as collections of entry points invoked by OS threads. Singularity and RMoX rely on OS and language support for improved verifiability. They are both process-based OSs, where all system components (not just device drivers) communicate via message passing. Driver development requires getting to grips with new programming languages that provide first-class support for message-based communication and formal specification and verification of communication protocols. In this paper we show that the benefits of active drivers can be achieved while writing drivers in familiar C for a conventional OS. To this end, we present an implementation of an active driver framework for the Linux kernel. The framework does not require any modifications to existing kernel code and allows active drivers to co-exist with conventional drivers. We develop a new verification method that enables efficient, automatic checking of active driver protocols. Our method leverages existing verification tools for C, extended with several novel optimisations geared towards making active driver verification tractable. Like other existing automatic verification techniques, the method is not complete—it helps to find bugs, but does not guarantee their absence. Through experiments involving verification of several complex drivers for Linux, we demonstrate that our driver design and verification methodology amplifies the power of verification tools in finding driver bugs. 
In particular, many properties that are hard or impossible to verify in conventional drivers can be easily checked on active drivers. 2 Passive vs active drivers In this section we argue that the active driver architecture should be the preferred one for modern OSs. To this end, we discuss the shortcomings of the conventional driver architecture and show how active drivers address these shortcomings. 2.1 Passive drivers The passive driver architecture supported by all mainstream OSs suffers from two problems that complicate verification of the driver-OS interface: stack ripping and concurrency. Stack ripping A passive device driver comprises a collection of entry points invoked by the OS. When writing the driver, the programmer makes assumptions about possible orders in which its entry points are going to be activated. These assumptions are not explicit in the driver source code and can only be discovered by combining the driver with a hand-written model of the OS kernel that systematically generates all possible sequences of driver invocations [2]. This phenomenon, when the control flow of the program is scattered across multiple entry points and cannot be reconstructed from its source code is known as stack ripping [1]. Building accurate OS models has proved difficult, as even OS designers often lack a good understanding of what driver-OS interactions are allowed in practice [2]. Even in the presence of an accurate OS model, analysis of the driver control flow can be tricky. The following code fragment, showing two driver entry points, illustrates the problem: ```c int suspend(){dev_suspend(); free(p);...} void unplug(){...p->data=0;...} ``` This code incorrectly assumes that the `unplug()` entry point cannot be called after `suspend()` and therefore it is safe to deallocate pointer `p` inside `suspend()`. 
The bug can be discovered with the help of an OS model that simulates, among others, the problematic sequence of invocations and by using pointer analysis to detect the use-after-free pattern on pointer `p`. This approach does not scale well, because pointer analysis quickly gets intractable for code involving complex pointer manipulation. In this example the root cause of the bug, namely that the driver does not expect `unplug()` to happen after `suspend()`, is implicit in the source code of the driver. Instead we can only detect the bug's indirect consequence, e.g., an invalid pointer dereference. Concurrency The OS can invoke driver entry points from multiple concurrent threads, forcing driver developers to implement complex synchronisation logic to avoid races and deadlocks. To detect concurrency bugs, the OS model used in driver verification must be extended to simulate the multithreaded environment of an OS kernel [20]. Even with such a model, concurrent verification is far less scalable than sequential verification, as thread interleaving leads to dramatic state explosion. Case study We demonstrate the adverse effects of stack ripping and concurrency on driver verification through a real-world case study involving the Linux driver for the RTL8169 Ethernet controller. We analyse the history of bug fixes made to this driver, and identify those fixes that address OS interface violation bugs, where the driver incorrectly responds to certain sequences of OS requests. We found 12 such bugs. We apply SATABS, a state-of-the-art model checker for C, to detect these bugs. SATABS has been successfully applied to Linux drivers in the past [20]. In this paper, we also use it for analysis of active drivers and, as reported in Section 6, this enables efficient, automatic verification of RTL8169 and other drivers. Using SATABS as a model checker for both active and traditional drivers provides a fair comparison. Detecting driver bugs with SATABS requires a model of the OS.
We built a series of such models of increasing complexity so that each new model reveals additional errors but introduces additional execution traces and is therefore harder to verify. This way we explore the best-case scenario for the passive driver verification methodology: using our knowledge of the error we tune the model for this exact error. In practice more general and hence less efficient models are used in driver verification. In addition, to make sure that our study is not biased, we optimised our OS models for the best SATABS performance. To this end we analysed spurious counterexamples generated by SATABS and restructured the models to avoid such counterexamples or added static predicates to eliminate the counterexamples whenever possible (see Section 4 for more details on SATABS). Our initial model is purely sequential. It generates one call to a driver entry point at a time and waits for the invocation to complete before choosing the next entry point to call. By gradually improving this simple model, we were able to find 7 out of 12 bugs within 5 minutes each. The remaining errors are race conditions that are only triggered by concurrent driver invocations. In order to model concurrency while minimising state explosion, we simulate a limited form of concurrency where driver entry points can be invoked in a nested fashion whenever the driver calls an OS callback. While this simplified concurrency model is powerful enough to trigger the remaining 5 errors, SATABS was only able to find one of them before being interrupted after 12 hours. This analysis illustrates that (1) building an accurate OS model suitable for driver verification is a difficult task, and (2) an accurate OS model can be prohibitively expensive to verify. 2.2 Active drivers In contrast to passive drivers, an active driver runs in the context of its own thread or threads. Communication between driver threads and other OS threads occurs via message passing.
The OS sends I/O requests and interrupt notifications to the driver using messages. A message can carry a payload consisting of a number of typed arguments, determined by the message type. When the driver is ready to receive a message, it performs a blocking wait specifying one or more message types that it can accept in the current state. It notifies the OS about a completed request via a reply message. Hence, the order in which the driver handles and responds to OS requests is defined explicitly in its source code and can be analysed without requiring an OS model and without running into state explosion due to thread interleaving. Active driver framework We present our instantiation of the active driver architecture. Our design is based on the Dingo active driver framework for Linux [19], improving upon it in two ways. First, Dingo’s message passing primitives are implemented as C language extensions. In contrast, our framework supports drivers in pure C. Second, Dingo does not support automatic driver protocol verification. In our framework, the driver-OS interface consists of a set of mailboxes, where each mailbox is used for a particular type of message. The driver exchanges messages with the OS via EMIT and AWAIT primitives, that operate on messages and mailboxes. The EMIT function takes a pointer to a mailbox, a message structure, and a list of message arguments. It places the message in the mailbox and returns control to the caller without blocking. The AWAIT function takes references to one or more mailboxes and blocks until a message arrives in one of them. It returns a reference to the mailbox containing the message. A mailbox can queue multiple messages. AWAIT always dequeues the first message in the mailbox. This message is accessible via a pointer in the returned mailbox. An active driver can consist of several threads that handle different activities. For example, our active driver for a SATA controller (see Section 5) creates a thread per SATA port. 
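A minimal, single-threaded sketch of what these mailbox primitives could look like in C follows. The struct layout, queue capacity, and an AWAIT that merely scans the mailboxes for an already-queued message (instead of blocking) are simplifications of ours, not the framework's actual API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified mailbox model. A real framework would block in AWAIT until
 * a message arrives and would synchronise with OS threads; here AWAIT
 * simply returns the first non-empty mailbox, which is enough to
 * illustrate the interface described in the text. */

#define MBOX_CAP 8

struct msg { int type; void *payload; };

struct mailbox {
    struct msg queue[MBOX_CAP];   /* a mailbox can queue multiple messages */
    int head, tail, count;
    struct msg *current;          /* last message dequeued by AWAIT */
};

/* Non-blocking send: queue the message and return to the caller. */
int EMIT(struct mailbox *mb, struct msg m) {
    if (mb->count == MBOX_CAP) return -1;   /* queue full */
    mb->queue[mb->tail] = m;
    mb->tail = (mb->tail + 1) % MBOX_CAP;
    mb->count++;
    return 0;
}

/* Wait on one or more mailboxes; returns the mailbox whose first message
 * was dequeued, with the message reachable through ->current. */
struct mailbox *AWAIT(struct mailbox **mbs, int n) {
    for (int i = 0; i < n; i++) {
        struct mailbox *mb = mbs[i];
        if (mb->count > 0) {
            mb->current = &mb->queue[mb->head];
            mb->head = (mb->head + 1) % MBOX_CAP;
            mb->count--;
            return mb;
        }
    }
    return NULL;   /* a real implementation would block here, not return */
}
```

Note that AWAIT always dequeues the first message of the mailbox it returns, matching the FIFO semantics described above.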
Each driver thread registers one or more message-based interfaces, along with associated protocols, with the OS. This allows us to verify protocol compliance for each thread in isolation. In doing so, we ignore potential race conditions and deadlocks between threads inside the driver. This is acceptable, as our goal is bug finding and not complete formal verification. Previous research [19] has shown that active drivers can benefit from cooperative thread scheduling. The performance of most drivers is bound by I/O bandwidth rather than CPU speed, therefore they do not require true multiprocessor parallelism. Cooperative scheduling limits the number of possible thread interleavings, making drivers easier to write and simplifying verification of properties involving multiple threads. Our framework supports cooperative scheduling of threads within a driver (however, driver threads are scheduled preemptively with respect to other drivers and the rest of the kernel). In the future the framework can be easily extended to enable preemptive scheduling for those drivers that can take advantage of true parallelism. Example We use an example to illustrate how the active driver architecture facilitates driver development and verification. Figure 1(a) shows a fragment of driver code that matches the example of a driver bug in Section 2.1. In Figure 1, suspend, unplug, suspend_complete, and resume are pointers to driver mailboxes. In line 1 the driver waits for both suspend and unplug requests. After receiving a suspend request (checked by the condition at line 2) the driver puts the device in a low-power mode (line 3), deallocates pointer p (line 4) and notifies the OS about completion of the request by sending a message to the suspend_complete mailbox (line 5). It then waits for a resume request at line 7. 
This implementation has an equivalent bug to the one found in the passive version of the driver: it does not handle hot-unplug notifications after receiving a suspend request. A correct implementation must wait on both resume and unplug mailboxes at line 7. Otherwise the driver can deadlock waiting for a resume message that never arrives. In the active version of the driver, requests that the driver accepts or ignores in each state are explicitly listed in the driver source code. As a result, the root cause of the bug can be discovered using control flow analysis, without resorting to pointer analysis. The code in Figure 1(a) is longer than the original passive implementation, because all OS interactions are initiated by the driver explicitly. In our experience, this verbosity makes the logic of the driver easier to follow while having only modest effect on the overall driver size. 3 Specifying driver protocols This section presents our visual formalism for specifying active driver protocols. The formalism is similar to protocol state machines of Dingo [19] and Singularity [10]; however it provides additional means to capture liveness and fairness constraints, which enable the detection of additional types of driver bugs. The active driver framework associates a protocol with each driver interface. The protocol specifies legal sequences of messages exchanged by the driver and the OS. Protocols are defined by the driver framework designer and are generic in the sense that every driver that implements the given interface must comply with the associated protocol. In the case when the active driver framework is implemented within an existing OS, active driver protocols are derived from the existing driver interface supported by the OS. The framework includes wrapper components that perform the translation between the native function-based interface and message-based active driver protocols. 
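The control-flow difference between the buggy and the corrected driver can be captured as a small state machine. In this sketch (state and message names follow Figure 1; the encoding is our own) the `fixed` flag selects whether the SUSPENDED state also waits on the unplug mailbox; `handle` returns 0 when a message would never be AWAITed in the current state, which is exactly the deadlock described above:

```c
#include <assert.h>

/* Control-flow sketch of the example driver's suspend/unplug handling. */

enum msg { SUSPEND, RESUME, UNPLUG };
enum st  { RUNNING, SUSPENDED, DEAD };

/* fixed = 1 models the corrected driver that AWAITs both resume and
 * unplug after suspending; fixed = 0 models the buggy version that only
 * AWAITs resume. Returns 1 if the message is accepted in the current
 * state, 0 if it would be ignored forever. */
int handle(enum st *s, enum msg m, int fixed) {
    switch (*s) {
    case RUNNING:
        if (m == SUSPEND) { *s = SUSPENDED; return 1; }  /* dev_suspend() */
        if (m == UNPLUG)  { *s = DEAD;      return 1; }
        return 0;
    case SUSPENDED:
        if (m == RESUME)          { *s = RUNNING; return 1; }
        if (m == UNPLUG && fixed) { *s = DEAD;    return 1; }
        return 0;   /* buggy driver never AWAITs unplug in this state */
    default:
        return 0;
    }
}
```

Feeding the sequence suspend, unplug to both variants shows the fixed driver accepting the hot-unplug notification while the buggy one stalls.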
We specify driver protocols using finite state machines (FSMs) with additional liveness and fairness constraints. The protocol state machine conceptually runs in parallel with the driver: whenever the driver sends or receives a message that belongs to the given protocol, this triggers a matching state transition in the protocol state machine. Figure 1(b) shows a state machine for the protocol used by the example driver, describing the handling of power management and hot unplug requests. Each protocol state transition is labelled with the name of the mailbox through which the driver sends (‘!’) or receives (‘?’) a message. We represent complex protocol state machines compactly using Statecharts, which organise states into a hierarchy so that several primitive states can be clustered into a super-state. For example, the CONNECTED super-state in Figure 1(b) groups four primitive states with a common property that the ?unplug transition is enabled in all of them. In some protocol states the OS is waiting for the driver to complete a request. The driver cannot remain in such a state indefinitely, but must eventually leave the state by sending a response message to the OS. Such states are called timed states and are labelled with the clock symbol in Figure 1(b). In order to ensure that the driver does not deadlock in an AWAIT statement, the developer must rely on an additional assumption that if the driver waits for all incoming OS messages enabled in the current state, then one of them will eventually arrive. This is a form of weak fairness constraint [14] on the OS behaviour, which means that if some event (in this case, arrival of a message) is continuously enabled, it will finally occur. Not all protocol states have the weak fairness property. In the protocol state machine, we show weakly fair states with dashed border. 
For example, the SUSPENDED state in Figure 1b is weakly fair, which guarantees that at least one of the resume and unplug messages will eventually arrive in this state. A protocol-compliant device driver must obey the following 5 rules. Rule 1. (EMIT) The driver is allowed to emit a message to a mailbox iff this message triggers a valid state transition in the protocol state machine. The next rule ensures the driver does not ignore an incoming message forever. Rule 2. (AWAIT) When in a state where there is an enabled incoming message, the driver must eventually issue an AWAIT on the corresponding mailbox or transition into a state where this message is not enabled. Rule 3. (Deadlock-freedom) All AWAIT operations eventually terminate. To show that Rule 3 holds for an AWAIT statement, one must check that at this point in the program at least one protocol of the driver is in a weakly fair state and the AWAIT waits for all enabled messages of this protocol. This check requires simultaneous consideration of all driver protocols. For efficiency reasons we want to be able to verify one protocol at a time. Therefore, we replace Rule 3 with the weaker rule, which does not guarantee deadlock freedom, but detects most deadlocks in practice: Rule 3'. For each protocol of the driver and for each AWAIT operation, if the AWAIT waits for at least one message of this protocol, it waits for all of its messages enabled in the current state. Rule 4. (Timed) The driver must not remain in a timed state forever. Rule 5. (Termination) If the driver terminates, each of its protocol state machines must be in a final state. Rules 1, 3 and 5 describe safety properties, whose violation can be demonstrated by a finite execution trace. Rules 2 and 4 are liveness rules, for which counterexamples are infinite runs. Rule 2 is violated by an infinite run which, after a finite number of steps, reaches a point where an incoming message m is enabled, and remains enabled in all subsequent steps, but is never waited for by the driver.
Rule 4 is violated by an infinite run if, after a finite number of steps, the protocol state machine enters a timed state, and then remains in that timed state in all subsequent steps. Checking liveness properties often requires making further assumptions on the OS behaviour. Figure 2 illustrates this using a fragment of the SATA driver protocol. According to this protocol, the driver can send a host_activate message to the OS, which responds with a host_activate_cmpl message. While handling the host_activate request, the OS can issue one or more port_start requests to the driver. The OS guarantees that the host_activate_cmpl message is eventually received if the driver keeps waiting for it infinitely often in the ACTIVATING state. This ensures that the driver does not get stuck in the loop between the ACTIVATING and PORT_STARTING states forever. This type of fairness, where some event is guaranteed to happen as long as it is enabled infinitely often (but not necessarily continuously), is called strong fairness [14]. The strongly fair transition is marked with double arrows in the state diagram. 4 Verifying driver protocols The active driver architecture opens the way to more efficient verification of driver protocols. This section outlines the challenges that had to be overcome to achieve this and the resulting verification methodology. The goal of driver protocol verification is to check whether the driver meets all safety and liveness requirements assuming fair OS behaviour. We use two tools to this end: SATABS [7], geared towards safety analysis, and GOANNA [11], geared towards liveness analysis. These tools provide complementary capabilities that, when combined, enable full verification of driver protocols. Given an active device driver and the set of protocols it implements, we use SATABS to check safety rules 1, 3, and 5 and GOANNA to check liveness rules 2 and 4. This combination works well in practice, yielding a low overall false positive rate.
Our methodology is compatible with other similar tools. We use SATABS and GOANNA because our team is familiar with their internals and has the expertise required to implement novel optimisations to improve performance on active driver verification tasks. 4.1 Checking safety SATABS is an abstraction-refinement based model checker for C and C++ for checking safety properties. It is designed to perform best when checking control-flow dominated properties with a small number of data dependencies. Active driver protocol-compliance safety checks fall into this category. Given a program to verify, SATABS iteratively computes and verifies its finite-state abstraction with respect to a set of predicates over program variables. At each iteration it either terminates (by discovering a bug or proving that the program is correct) or generates a spurious counterexample. In the latter case, the counterexample is analysed by the tool to discover new predicates, used to construct a refined program abstraction. Abstraction and refinement are both fully automatic. We use a simple driver protocol shown in Figure 3a and a fragment of driver code that implements this protocol in Figure 3b as a running example to illustrate the use of SATABS. SATABS verifies program properties expressed as source code assertions. We encode rules 1 and 3 as assertions embedded in modified versions of AWAIT and EMIT. Figure 3c shows the driver code with AWAIT and EMIT functions encoding Rule 1 inlined. These functions keep track of the protocol state using the global state variable. The AWAIT function simulates the receiving of a message by randomly selecting one of incoming mailboxes enabled in the current state (line 5) and updating the state variable based on the current state and the message selected. The assume(0) statement in line 11 tells SATABS that this branch can never be reached; hence no other messages are allowed by the protocol. 
Similarly, the EMIT function updates the state variable based on the current state and the message being sent. It contains an assertion that triggers an error when the driver tries to send a message that is not allowed in the current state. Note that the m3==m3 tautology in line 16 results from inlining the body of EMIT, which compares its first argument against m3. To verify rule 5, we append to the driver's main function a check that, if the driver does terminate, the protocol state machine is in a final state.

In our running example, the abstraction refinement loop terminates after discovering the predicates \( p_1 \equiv (\text{state} == 1) \) and \( p_2 \equiv (m == m1) \). The abstraction of the program in Figure 3c with respect to these two predicates is shown in Figure 3d. The abstract program has the same structure as the concrete one; however, it only keeps track of the predicate variables, abstracting away the rest of the driver state. Using this pair of predicates (but not either one separately), SATABS is able to verify that the abstract program cannot trigger the assertion; hence the original concrete program is correct with respect to the safety property being checked.

Our preliminary experiments show that straightforward application of SATABS to active drivers results in very long verification times. This is in part due to the complexity of the driver protocols being verified, and in part because the predicate selection heuristics implemented in these tools introduce large numbers of unnecessary predicates, leading to overly complex abstractions. The problem is not unique to SATABS: our preliminary experience with SLAM, another state-of-the-art abstraction-refinement tool, produced similar results. We describe several novel strategies that exploit the properties of active drivers to make their safety verification feasible. We believe that these techniques will also be useful in other software protocol verification tasks.
### Protocol decomposition

The abstraction-refinement technique is highly sensitive to the size of the property being checked. Complex properties require many predicates, and since verification time grows exponentially with the number of predicates, it is beneficial to decompose complex properties into simple ones that can be verified independently. As a preprocessing step, we decompose each driver protocol state machine into a set of much simpler subprotocols. Every subprotocol captures a simple rule derived from the main protocol. For instance, given the protocol in Figure 1(b), we can define, among others, the following rules related to the unplug message: (1) the driver must be prepared to handle the unplug message at any time; (2) the driver must respond to the unplug message by sending unplug_complete to the OS. These rules are captured by the two subprotocol state machines at the far left in Figure 4. The next rule (the third protocol from the left) describes the occurrence of the suspend message: suspend can arrive in the initial state, is re-enabled by the resume_complete message, and is permanently disabled by the unplug message. The complete decomposition consists of the six subprotocols shown in Figure 4. The decomposition is constructed in such a way that each subprotocol describes the occurrence of a single type of message, shown in bold italics in the diagram. Any other message is allowed to occur in any state of the subprotocol, as it is constrained by a separate subprotocol. The decomposition has the important property that the driver satisfies the safety constraints of the original protocol if and only if it does so for each protocol in the decomposition. Formally, this is achieved by ensuring that the parallel product of the subprotocol state machines is equivalent to the original protocol state machine. In our experience, even complex driver protocols can be decomposed into simple subprotocols with no more than four states and only a few transitions.
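To make the decomposition idea concrete, here is a hedged C sketch of two of the unplug/suspend subprotocols described above, each checked independently over the same message trace. The encoding (function names, state values, return conventions) is ours, not the paper's; note that a safety checker like this can catch ordering violations only, not the liveness obligation to eventually send unplug_complete.

```c
/* Hedged sketch of two subprotocol checkers for the unplug/suspend
 * rules. Each checker returns 1 if the message is allowed in its
 * current state and 0 on a violation. */
enum msg { UNPLUG, UNPLUG_COMPLETE, SUSPEND, RESUME_COMPLETE };

/* Subprotocol A: unplug_complete may only follow a pending unplug. */
static int sub_a = 0;                      /* 0 = idle, 1 = unplug pending */
static int check_a(enum msg m) {
    if (m == UNPLUG)               { if (sub_a) return 0; sub_a = 1; }
    else if (m == UNPLUG_COMPLETE) { if (!sub_a) return 0; sub_a = 0; }
    return 1;                      /* other messages are unconstrained */
}

/* Subprotocol B: suspend is enabled initially, re-enabled by
 * resume_complete, and permanently disabled by unplug. */
static int sub_b = 0;   /* 0 = enabled, 1 = suspended, 2 = unplugged */
static int check_b(enum msg m) {
    if (m == SUSPEND)              { if (sub_b != 0) return 0; sub_b = 1; }
    else if (m == RESUME_COMPLETE) { if (sub_b == 1) sub_b = 0; }
    else if (m == UNPLUG)          { sub_b = 2; }
    return 1;
}
```

The driver trace is fed to every checker independently; by construction of the decomposition, a trace satisfies the original protocol's safety constraints exactly when every checker accepts it.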
Verifying each subprotocol requires a small subset of the predicates involved in checking the monolithic protocol, leading to exponentially faster verification.

**Automatically provide key predicates.** One way to speed up the abstraction-refinement algorithm is to seed it with a small set of key predicates that allow refuting large families of counterexamples. Guessing such key predicates in general is extremely difficult. In the case of active driver verification, however, an important class of key predicates can be provided to SATABS automatically. As illustrated in Figure 3c, when checking a driver protocol we introduce a global variable that keeps track of the protocol state. In verifying the protocol, SATABS eventually discovers predicates over this variable of the form \((\text{state}==1), (\text{state}==2), \ldots\), one for each state of the protocol (e.g., predicate \(p_1\) in Figure 3d). These predicates are important for establishing the correspondence between the driver control flow and the protocol state machine. We therefore provide them to SATABS on startup, which accelerates verification significantly.

**Control-flow transformations.** We found that it often takes SATABS many iterations to correlate dependent program branches. For example, the else-if branch in line 9 and the if-branch in line 13 in Figure 3c cannot both be taken in the same run of the driver. SATABS does not know about this correlation initially, leading to a spurious counterexample trace that takes both branches and triggers the assertion in line 19. This counterexample can be refuted using the predicate \(p_2 \equiv (m == m_1)\). In practice, however, the driver can contain hundreds of lines of code between the condition in line 13 and the failed assertion, which leads to a large number of similar spurious counterexample traces.
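The correlated-branch pattern just described, and the effect of merging the branches, can be sketched as follows. This is an illustrative reconstruction (the real code is in the paper's Figures 3c and 3e); the names, the trace log, and the `await_stub` stand-in for AWAIT are ours.

```c
#include <string.h>

/* Illustrative reconstruction of the correlated-branch pattern.
 * await_stub stands in for AWAIT and always returns M1 here. */
enum msg { M1, M2, M3 };

static char trace_log[64];
static void record(const char *s) { strcat(trace_log, s); }
static enum msg await_stub(void) { return M1; }

/* Before the rewrite: the AWAIT result is tested in two separate,
 * correlated branches; a verifier initially considers the infeasible
 * path that takes the first branch but not the second. */
static void driver_before(void) {
    enum msg m = await_stub();
    if (m == M1) record("handle;");
    /* ... potentially hundreds of unrelated lines ... */
    if (m == M1) record("emit;");
}

/* After the rewrite: the correlated branches are merged, so the
 * infeasible path disappears from the control-flow graph. */
static void driver_after(void) {
    enum msg m = await_stub();
    if (m == M1) { record("handle;"); record("emit;"); }
}
```

Both versions behave identically at run time; the transformation only removes paths that could never execute, which is why it is sound to apply before verification.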
In analysing these traces, SATABS introduces many predicates that only refute a subset of these counterexamples before discovering \(p_2\), which refutes all of them. This problem frequently occurs in active drivers when the driver **AWAITs** on multiple mailboxes and then checks the returned value. To remedy the problem, we have implemented a novel control-flow graph transformation that uses static analysis to identify correlated branches and merges them. The analysis identifies, by inspecting uses of the **AWAIT** function, where to apply the transformation. Infeasible paths through each candidate region are then identified by generating Boolean satisfiability queries that are discharged to a SAT solver, and the CFG region is rewritten to eliminate the infeasible paths. The effect of the rewriting on the CFG is shown in Figure 5. Figure 3e shows the driver after the transformation; this version of the driver can be verified using only predicate \(p_1\). The technique effectively replaces the expensive search for additional predicates with much cheaper static program analysis. In our experiments, SATABS performs orders of magnitude more effectively on the new program structure, quickly inferring key predicates that could previously only be inferred after many abstraction refinement iterations and the inference of many redundant predicates.

### 4.2 Checking liveness

Since SATABS is restricted to the analysis of safety properties, we use the GOANNA tool to analyse liveness properties. GOANNA is a C and C++ bug-finding tool that supports user-defined rules written in the CTL temporal logic [9], which allows natural specification of both safety and liveness properties. Unlike SATABS, GOANNA is intended as a fast compile-time checker and therefore does not perform data-flow analysis. The properties to be checked for each protocol are extracted from the protocol specification.
In particular, we apply the **AWAIT** rule to every incoming mailbox and the **Timed** rule to every timed state of the protocol. Describing a temporal property in the GOANNA specification language involves two steps. First, we identify a set of program events relevant to the property being verified, such as the sending and receiving of messages, and use syntactic pattern matching to label the program locations that correspond to these events. Second, we encode the property to be checked as a temporal logic formula in a dialect of CTL, defined over the events identified in the previous step. Due to limited space, we omit the details of this encoding.

### 4.3 Automation

Verifying active driver protocols requires transforming protocol state machines into a representation supported by the verification tools. These transformations include: protocol decomposition, encoding of safety properties as C assertions, and encoding of liveness properties in the GOANNA specification language. They can be automated in a straightforward way, but their automation would require significant additional implementation effort. Because our resources are limited, in all experiments cited in this paper the protocol transformations were performed manually, providing a large proof-of-concept for our approach.

## 5 Implementation

We implemented the active driver framework, along with several active device drivers, in Linux 2.6.38. The framework consists of loadable kernel modules and does not require any changes to other kernel components. The generic part of the framework, shared by all active drivers, provides support for scheduling and message passing. It implements the cooperative domain abstraction: a collection of cooperatively scheduled kernel threads hosting an active driver. Threads inside the domain communicate with the kernel via a shared message queue. The framework guarantees that at most one thread in the domain is runnable at any time.
The thread keeps executing until it blocks in the AWAIT function. AWAIT checks whether there is a message available in one of the mailboxes specified by the caller and, if so, returns without blocking. Otherwise, it calls the thread dispatcher function, which finds a thread for which a message has arrived. The dispatcher uses the kernel scheduler interface to suspend the current thread and make the new thread runnable. In the future, this design can be optimised by implementing native support for lightweight threads in the kernel.

The EMIT and AWAIT functions do not perform memory allocation and therefore never fail. This simplifies driver development, as the driver does not need to implement error handling logic for each invocation of these ubiquitous operations. On the other hand, this means that the driver is responsible for allocating messages sent to the OS and deallocating messages received from the OS. By design of driver protocols, most mailboxes can contain at most one message, since the sender can only emit a new message to the mailbox after receiving a completion notification for the previous request. Such messages can be pre-allocated statically.

Interrupt handling in active drivers is separated into top and bottom halves. The driver registers with the framework a top-half function that is invoked by the kernel in the primary interrupt context (outside the cooperative domain). A typical top-half handler reads the interrupt status register, acknowledges the interrupt in the device, and sends an IRQ message to the driver. The actual interrupt handling happens inside the cooperative domain, in the context of the driver thread that receives the IRQ message. IRQ delivery latency can be minimised by queueing interrupt messages at the head of the message queue; alternatively, interrupts can be queued as normal messages, which avoids interrupt livelock and ensures fair scheduling of interrupts with respect to other driver tasks.
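The top-half/bottom-half split can be modelled in a few lines of single-threaded C. This is a hedged sketch, not the framework's real API: the queue, `top_half`, and `driver_thread` names are invented, and real interrupt context and scheduling are replaced by plain function calls.

```c
/* Minimal single-threaded model of the top-/bottom-half split
 * described above (names and data structures are illustrative). */
enum msg { MSG_IRQ };

#define QLEN 8
static enum msg irq_queue[QLEN];
static int q_head = 0, q_tail = 0;

static void enqueue(enum msg m) { irq_queue[q_tail++ % QLEN] = m; }
static int  dequeue(enum msg *m) {
    if (q_head == q_tail) return 0;
    *m = irq_queue[q_head++ % QLEN];
    return 1;
}

static int device_irq_status = 1;   /* pretend the device raised an IRQ */
static int handled = 0;

/* Top half: runs in primary interrupt context, outside the cooperative
 * domain; reads and acknowledges the IRQ, then queues a message. */
static void top_half(void) {
    if (device_irq_status) {
        device_irq_status = 0;      /* ack the interrupt in the device */
        enqueue(MSG_IRQ);
    }
}

/* Bottom half: the driver thread receives the IRQ message inside the
 * cooperative domain and performs the actual handling. */
static void driver_thread(void) {
    enum msg m;
    while (dequeue(&m))
        if (m == MSG_IRQ) handled++;
}
```

The key structural point is that the top half does only the minimal work that must happen in interrupt context; everything else is an ordinary protocol message handled by the driver thread.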
In addition to the generic functionality described above, the active driver framework defines protocols for the supported classes of drivers and provides wrappers that perform the translation between the Linux driver interface and message-based active driver protocols. Wrappers enable conventional and active drivers to co-exist within the kernel. Active driver protocols are derived from the corresponding Linux interfaces by replacing every interface function with a message or a pair of request/response messages. While multiple function calls can occur concurrently, messages are serialised by the wrapper. Since Linux lacks a formal or informal specification of its driver interfaces, deriving the protocol state machines often required tedious inspection of the kernel source. On the positive side, we found that, compared to building an OS model as a C program, state machines provide a natural way to capture protocol constraints and are useful not only for automatic verification but also as documentation for driver developers.

Table 1 lists the protocols we have specified and implemented wrappers for. For each protocol, it gives the number of protocol states and transitions, and the number of subprotocols in its decomposition (see Section 4.1). The PCI protocol provides access to OS services used to manage a device on a PCI bus, including configuration, hot plugging, and power management. The Ethernet, SATA, and SCSI protocols describe services that network and storage drivers provide.

Table 1: Implemented active driver protocols.
<table> <thead> <tr> <th>protocol</th> <th>#states</th> <th>#transitions</th> <th>#subprotocols</th> </tr> </thead> <tbody> <tr> <td>PCI</td> <td>13</td> <td>41</td> <td>11</td> </tr> <tr> <td>Ethernet</td> <td>17</td> <td>36</td> <td>6</td> </tr> <tr> <td>SCSI</td> <td>42</td> <td>67</td> <td>-</td> </tr> <tr> <td>SATA</td> <td>39</td> <td>70</td> <td>22</td> </tr> <tr> <td>DAI</td> <td>8</td> <td>20</td> <td>6</td> </tr> </tbody> </table>

Table 2: Active device driver case studies, protocols that each driver implements, and the size of the native Linux and active versions of the driver in lines of code (LOC) measured using sloccount.

Table 3: Statistics for checking safety properties using SATABS.

<table> <thead> <tr> <th>driver</th> <th>avg(max) time(minutes)</th> <th>avg(max) refinements</th> <th>avg(max) predicates</th> </tr> </thead> <tbody> <tr> <td>RTL8169</td> <td>29 (103)</td> <td>3 (7)</td> <td>3 (8)</td> </tr> <tr> <td>AHCI</td> <td>123 (335)</td> <td>2 (6)</td> <td>2 (19)</td> </tr> <tr> <td>OMAP DAI</td> <td>5 (13)</td> <td>2 (5)</td> <td>2 (0)</td> </tr> </tbody> </table>

## 6 Evaluation

### 6.1 Verification

We applied the verification methodology described in Section 4 to the RTL8169, AHCI, and OMAP DAI drivers. We did not verify the ATA framework driver due to time constraints. Verification was performed on machines with 2GHz quad-core Intel Xeon CPUs.

**Comparison with conventional driver verification.** An important remaining question is how our verification methodology compares with the conventional approach to driver verification in terms of its ability to detect real driver bugs. In Section 2 we showed, using the Linux RTL8169 driver case study, that the scalability of the traditional verification methodology for passive drivers is limited by the complexity of building an accurate OS model and by the state explosion resulting from concurrency.
We carried out an equivalent case study on the active version of the RTL8169 driver. To this end, we simulated in the active driver the OS protocol violations found in the native Linux driver. To reproduce concurrency-related errors in the active driver, we considered message sequences that simulate the thread interleavings of the conventional driver. We were able to detect each of the 12 protocol violation bugs within 3 minutes per bug. This result confirms that the active driver architecture, along with the verification methodology presented above, leads to device drivers that are more amenable to automatic verification than passive drivers.

**Comparison with SLAM.** SLAM [2] is a state-of-the-art driver verification tool used in industry to find bugs in Windows device drivers. It defines hundreds of safety rules that capture common driver safety errors. Combined with the Terminator [8] liveness checker, it can also detect liveness errors. Analysis of the SLAM rule database shows that most of its rules are similar to the subprotocols obtained after decomposition of active driver protocols (see e.g. Figure 4). They describe simple properties such as "event A must happen after event B and before event C". With the exception of rules that are not applicable to active drivers, such as spinlock usage rules, all of the SLAM rules can be expressed as part of active driver protocols. On the other hand, not every active driver protocol rule can be defined for conventional drivers. Rules that require the driver to wait for certain protocol messages in a state have no analogues in the SLAM rule database. This is not a limitation of SLAM, but rather a conceptual limitation of the passive driver architecture, as discussed in Section 2. Out of the 45 subprotocols in Table 1, 26 encode such rules and thus would not be amenable to SLAM-based verification.
### 6.2 Performance

**Macrobenchmarks.** The performance of active drivers depends on the overhead introduced by thread switching and message passing. We measure this overhead on a machine with two quad-core 1.5GHz Xeon CPUs. In the first set of experiments, we measure communication throughput by sending a stream of messages from a normal kernel thread to a thread inside a cooperative domain. Messages are buffered in the message queue and delivered in batches when the cooperative domain is activated by the scheduler. This setup simulates streaming of network packets through an Ethernet driver. The achieved throughput is \(2 \cdot 10^6\) messages/s (500 ns/message) with both threads running on the same core and \(1.2 \cdot 10^6\) messages/s (800 ns/message) with the two threads assigned to different cores on the same chip.

Second, we run the same experiment with a varying number of kernel threads distributed across the available CPU cores (without enforcing CPU affinity), with each Linux thread communicating with the cooperative thread through a separate mailbox. As shown in Figure 6, we do not observe any noticeable degradation of throughput or CPU utilisation as the number of clients contending to communicate with the single server thread increases (the drop between one and two client threads is due to the higher cost of inter-CPU communication). This shows that our implementation of message queueing scales well with the number of clients.

Third, we measure the communication latency between a Linux thread and an active driver thread running on the same CPU by bouncing a message between them in a ping-pong fashion. The average measured round-trip latency is 1.8 µs. For comparison, the round-trip latency of a Gigabit network link is at least 55 µs.

**Microbenchmarks.** We compare the performance of the active RTL8169 Ethernet controller driver against the equivalent native Linux driver using the Netperf benchmark suite on a 2.9GHz quad-core Intel Core i7 machine.
The results of the comparison are shown in Figure 7. In the first set of experiments we send a stream of UDP packets from the client to the host machine, measuring achieved throughput (using Netperf) and CPU utilisation (using oprofile) for different payload sizes. The client machine is equipped with a 2GHz AMD Opteron CPU and a Broadcom NetXtreme BCM5704 NIC. The active driver achieved the same throughput as the native Linux driver on all packet sizes, while using 20% more CPU in the worst case (Figure 7(a)).

Figure 6: Message throughput and aggregate CPU utilisation over 8 CPUs for a varying number of clients.

In the second set of experiments, we fix the payload size to 64 bytes and vary the number of clients generating UDP traffic to the host between 1 and 8. The clients are distributed across four 2GHz Intel Celeron machines, each with an Intel PRO/1000 MT NIC. The results (Figure 7(b)) show that the active driver sustains up to 10% higher throughput while using proportionally more CPU. Further analysis revealed that the throughput improvement is due to slightly higher IRQ latency, which allows the driver to handle more packets per interrupt, leading to a lower packet loss rate.

The third set of experiments measures the round-trip communication latency between the host and a remote client with a 2GHz AMD Opteron CPU and a NetXtreme BCM5704 NIC. Figure 7(c) shows that the latency introduced by message passing is completely masked by the network latency in these experiments.

We evaluate the performance of the AHCI SATA controller driver and the ATA framework driver using the IOzone benchmark suite running on a system with a 2.33GHz Intel Core 2 Duo CPU, a Marvell 88SE9123 PCIe 2.0 SATA controller, and a WD Caviar SATA-II 7200 RPM hard disk. We run the benchmark with a working set of 500MB on top of the raw disk. We benchmark both drivers, stacked on top of each other, against the equivalent Linux drivers.
Both setups achieved the same I/O throughput on all tests, while the active drivers' CPU utilisation was slightly higher (Figure 8). This overhead can be reduced through improved protocol design: our SATA driver protocol, based on the equivalent Linux interface, requires 10 messages for each I/O operation, and a clean-slate redesign of this protocol would involve far fewer messages. We did not benchmark the DAI driver, as it has trivial performance requirements and uses less than 5% of the CPU.

## 7 Related work

**Active drivers.** Singularity [10] is a research OS written in the Sing# programming language. It comprises a collection of processes communicating over message channels. Sing# supports a state-machine-based notation for specifying communication protocols between various OS components, including device drivers. The Sing# compiler checks protocol compliance at compile time. Sing# extends its memory safety guarantees to message-based communication: for example, the compiler is able to verify that a program never dereferences a pointer whose ownership was passed to another process in a message. In contrast, our C-based implementation of active drivers does not assign any special meaning to pointers passed between the driver and the OS.

RMoX [3] is a process-based OS written in occam-pi. RMoX processes communicate using synchronous rendezvous. Communication protocols are formalised using the CSP process algebra and verified using the FDR tool.

The Dingo [19] active driver framework for Linux aims to simplify driver programming in order to help driver developers avoid errors. It relies on a C language extension to provide language-level support for messages and threads. Dingo uses a Statechart-based language to specify driver protocols; however, it only supports runtime protocol checking and does not implement any form of static verification.

The CLARITY [5] programming language is designed to make passive drivers more amenable to automatic verification.
To this end it provides constructs that allow writing event-based code in a sequential style, which reduces stack ripping. It simplifies reasoning about concurrency by encapsulating thread synchronisations inside coord objects that expose well-defined sequential protocols to the user.

**User-level drivers.** User-level driver frameworks for microkernel-based OSs [17] encapsulate each device driver in a separate process that communicates with other OS processes using some form of message passing. The driver thread executes an event loop that handles incoming messages by invoking the appropriate driver entry points. Thus, even though the driver has its own thread of control and uses messages for external communication, internally it is based on the passive programming model and suffers from stack ripping.

**Verification tools.** Automatic verification of C programs [2, 8, 7, 13, 18] is an active area of research, which is complementary to our work on making drivers amenable to formal analysis using such tools. Any improvements to these tools are likely to further improve the speed and accuracy of active driver verification.

Several verification tools, including SPIN [16], focus on checking message-based protocols in distributed systems. These tools work on an abstract model of the system that is either written by the user or extracted from the program source code [15]. Such a model constitutes a fixed abstraction of the system that cannot be automatically refined if it proves too coarse to verify the property in question. Our experiments show that abstraction refinement is essential to avoiding false positives in active driver verification; we therefore do not expect these tools to perform well on active driver verification tasks.

## 8 Conclusion

We argue that improvements in automatic device driver verification cannot rely solely on smarter verification tools and require an improved driver architecture.
Previous proposals for verification-friendly drivers were based on specialised language and OS support and were therefore not compatible with existing systems. Building on ideas from this earlier research, we developed a driver architecture and verification methodology that support drivers written in C and can be implemented in any existing OS. Our experiments confirm that this methodology enables more thorough verification of the driver-OS interface than is possible for conventional drivers.

## 9 Acknowledgements

We would like to thank Michael Tautschnig for his help in troubleshooting SATABS issues. We thank the GOANNA team, in particular Mark Bradley and Ansgar Fehnker, for explaining GOANNA internals and providing us with numerous ideas and examples for verifying active driver properties with GOANNA. We thank Toby Murray for his feedback on a draft of the paper. NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and by the Australian Research Council through the ICT Centre of Excellence program.
# Specification of Concretization and Symbolization Policies in Symbolic Execution

Robin David and Sébastien Bardin (CEA, LIST, Saclay, France; first.last@cea.fr), Josselin Feist, Laurent Mounier, Marie-Laure Potet and Thanh Dinh Ta (UGA, Verimag, Grenoble, France; first.last@imag.fr), and Jean-Yves Marion (Université de Lorraine, CNRS, LORIA, Nancy, France; first.last@loria.fr)

HAL Id: hal-01721492 ([https://hal.univ-grenoble-alpes.fr/hal-01721492](https://hal.univ-grenoble-alpes.fr/hal-01721492)), submitted on 2 Mar 2018.

ABSTRACT

Symbolic Execution (SE) is a popular and profitable approach to automatic code-based software testing. Concretization and symbolization (C/S) is a crucial part of modern SE tools, since it directly impacts the trade-offs between correctness, completeness and efficiency of the approach. Yet, C/S policies have barely been studied. We intend to remedy this situation and to establish C/S policies on firm ground. To this end, we propose a clear separation of concerns between C/S specification on one side, through the new rule-based description language CSml, and the algorithmic core of SE on the other side, revisited to take C/S policies into account. This view is implemented on top of an existing SE tool, demonstrating the feasibility and the benefits of the method.
This work paves the way for more flexible SE tools with well-documented and reusable C/S policies, as well as for a systematic study of C/S policies.

CCS Concepts: Software and its engineering → Formal software verification; Software testing and debugging; Dynamic analysis; Specification languages

Keywords: automatic test generation; formal methods; symbolic execution; specification language

## 1. INTRODUCTION

Context. Symbolic Execution (SE) [15] is a popular and fruitful formal approach to automatic (code-based) software testing. Given a path in a program, the key insight of SE is that in many cases it is possible to compute a formula (a path predicate) such that a solution to this formula is a test input exercising the considered path. Exploring all the (bounded) paths of the program then allows for intensive testing and efficient bug finding. The foundations were laid in the 1970s by King [24], but the technique found renewed interest in the mid 2000s when it was combined with concrete execution [26, 21, 28] and with the growing efficiency of SMT solvers. SE has quickly become the most promising technique for code-based automatic test generation, leading to impressive case studies [12, 16, 2, 14] and a promise of industrial adoption at large scale [9, 22]. Its usage for security purposes has also been considered, especially because of its straightforward adaptation to binary-level analysis [22, 4, 27, 3]. SE has successfully been applied in a wide range of security applications, such as vulnerability [1, 23] or malware analysis [10].

Problem. While a purely symbolic approach is worth considering, the strength of modern SE tools is to symbolically evaluate only a (small) trace fragment. Concretization uses run-time values in order to under-approximate the path predicate, while symbolization over-approximates the path predicate by introducing new logical variables.
The former makes it possible to handle, in a precise but limited way, parts of an execution which are either missing (e.g., system calls) or too costly to reason about (e.g., cryptographic functions), while the latter makes it possible to generalize certain program steps, keeping the reasoning exhaustive but less precise. Actually, choices of concretization and symbolization (C/S) are a crucial part of modern SE tools, together with path predicate computation and path selection. Yet, while the latter are either well understood (path predicate) or under active research efforts (path selection), C/S policies have been much less studied. In particular, design choices behind implemented C/S policies are often barely explained, and most SE tools either propose only hard-coded C/S policies, or give full control in a line-per-line manner in the code [18]. Goal and contribution. We propose to address these problems through a clear separation of concerns between (1) a specification mechanism for C/S policies in SE, and (2) a new SE algorithm parametrized by an arbitrary C/S policy. The main contributions of this paper are the following:

• We formalize what a C/S policy is and we revisit the standard path predicate computation algorithm to take C/S policies into account (Section 4.1). This is the first time such a parametric view of the core algorithm behind SE is provided. We clearly show where the C/S policy matters and we discuss correctness issues.

• We propose CSml, a rule-based specification language for defining C/S policies, together with its semantics* (Sections 4.2 to 4.4). The language is simple, yet powerful enough to encode standard C/S policies. Again, correctness issues are discussed.

• As a first application, we perform an extensive literature review, and we show how to encode existing C/S policies into CSml, highlighting in some cases subtle differences between similar policies (Section 4.5).

*Work partially funded by ANR, grant 12-INSE-0002.
• As a second application, these results have been implemented on top of the Binsec framework [19, 20], yielding the first SE tool with fully customizable C/S policies through high-level specifications (Section 5). First experiments demonstrate that the overhead induced by this genericity is very low (Section 6.2). We also show an example of an original C/S policy fine-tuned for vulnerability detection (Section 6.3).

• Finally, we present the first quantitative comparison of C/S policies (Section 6.1), focused on policies dedicated to the handling of memory operations. We compare five policies on 169 programs. We found that, while the new policy PP* performs better on most examples, there is still a high variability of results between the policies depending on the considered example. This is a strong a posteriori argument for a generic C/S specification mechanism.

Outcome. This work proposes a clear separation of concerns between the core SE algorithms and C/S specification, paving the way for flexible SE tools with easy-to-configure C/S policies. Additional benefits include: (1) better documented policies, facilitating their understanding, comparison and reuse; (2) the systematic study of concretization and symbolization (including both analytic and quantitative analysis) in order to better understand their impact and to identify interesting trade-offs; and finally (3) the fine-tuning of dedicated policies tailored to specific programs or needs. 2. BACKGROUND 2.1 Notation Given a program $P$ over a vector $V$ of $m$ input variables taking values in a domain $D \triangleq D_1 \times \ldots \times D_m$, a test datum $t$ for $P$ is a valuation of $V$, i.e. $t \in D$.
The execution of $P$ over $t$, denoted $P(t)$, is a path (or run) $\sigma \triangleq (l_1, \Sigma_1) \ldots (l_n, \Sigma_n)$, where the $l_i$ denote control-locations (or simply locations) of $P$ and the $\Sigma_i$ denote the successive internal states of $P$ (valuation of all global and local variables as well as of memory-allocated structures) before the execution of each $l_i$. 2.2 Symbolic Execution in brief We recall here a few basic facts about Symbolic Execution (SE) [24] and Dynamic Symbolic Execution (DSE) [21, 26, 28]. Let us consider a program under test $P$ with input variables $V$ over domain $D$ and a path $\sigma$ of $P$. The key insight of SE is that it is possible in many cases to compute a path predicate $\phi_\sigma$ for $\sigma$ such that for any input valuation $t \in D$, we have: $t$ satisfies $\phi_\sigma$ iff $P(t)$ covers $\sigma$. Such a path predicate is said to be both correct and complete, where correctness denotes the left-to-right implication (a solution does cover the intended path) and completeness denotes the right-to-left implication (any input covering the path is a solution). A path predicate is intuitively the logical conjunction of all branching conditions and assignments encountered along that path. Figure 1 presents a simple program path (two assignments and a branching condition $x > 10$ taken to true) together with three possible path predicates. It is straightforward to check that $\phi_1$ is correct and complete (a valuation of $a$ and $b$ satisfies $\phi_1$ iff its execution satisfies the assertion), while $\phi_2$ is correct but incomplete because of the additional constraint $a = 5$, and $\phi_3$ is complete but incorrect because of the removal of the constraint $x_1 = a \times b$.
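These three statuses can also be checked mechanically. Below is a small self-contained sketch (our own toy encoding, not from the paper) that verifies them by brute-force enumeration over a small input domain, with the intermediate variables $x_1$, $x_2$ inlined:

```python
# Brute-force check of the three path predicates of Figure 1 for the program
# path: x := a*b; x := x+1; branch x > 10 taken to true.

def covers(a, b):
    """Does the concrete execution on (a, b) actually follow the path?"""
    x = a * b
    x = x + 1
    return x > 10

def phi1(a, b):  # x1 = a*b /\ x2 = x1+1 /\ x2 > 10, with x1, x2 inlined
    return a * b + 1 > 10

def phi2(a, b):  # concretization of a (runtime value 5): adds a = 5
    return a == 5 and 5 * b + 1 > 10

def phi3(a, b):  # symbolization of x on the first line: exists fresh. fresh+1 > 10
    return any(fresh + 1 > 10 for fresh in range(-50, 51))

pairs = [(a, b) for a in range(-10, 11) for b in range(-10, 11)]

# phi1 is correct and complete: its solutions are exactly the covering inputs.
assert all(phi1(a, b) == covers(a, b) for a, b in pairs)
# phi2 is correct (every solution covers the path) but incomplete.
assert all(covers(a, b) for a, b in pairs if phi2(a, b))
assert any(covers(a, b) and not phi2(a, b) for a, b in pairs)
# phi3 is complete (every covering input is a solution) but incorrect.
assert all(phi3(a, b) for a, b in pairs if covers(a, b))
assert any(phi3(a, b) and not covers(a, b) for a, b in pairs)
```

In a real SE tool the enumeration is of course replaced by an SMT solver; the point here is only the set-inclusion reading of correctness and completeness.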
<table> <thead> <tr> <th>Program</th> <th>Path predicate $\phi_1$</th> <th>Concretization $\phi_2$</th> <th>Symbolization $\phi_3$</th> </tr> </thead> <tbody> <tr> <td>$x := a \times b$</td> <td>$x_1 = a \times b$</td> <td>$a = 5 \land x_1 = 5 \times b$</td> <td>$x_1 = fresh$</td> </tr> <tr> <td>$x := x + 1$</td> <td>$\land\ x_2 = x_1 + 1$</td> <td>$\land\ x_2 = x_1 + 1$</td> <td>$\land\ x_2 = x_1 + 1$</td> </tr> <tr> <td>//assert $x &gt; 10$</td> <td>$\land\ x_2 &gt; 10$</td> <td>$\land\ x_2 &gt; 10$</td> <td>$\land\ x_2 &gt; 10$</td> </tr> </tbody> </table> $\phi_1$ is a correct and complete path predicate, $\phi_2$ is obtained through concretization of $a$ (assuming its runtime value is 5) and $\phi_3$ through symbolization of $x$ on the first line (fresh is a new unconstrained variable). Figure 1: Path predicate, concretization and symbolization In practice, path predicates are often under-approximated and only correctness holds, which is fine for testing: SE outputs a set of pairs $(t_i, \sigma_i)$ such that each $t_i$ is ensured to cover the corresponding $\sigma_i$. DSE enhances SE by interleaving concrete and symbolic executions. The dynamically collected information can help the symbolic step, for example by suggesting relevant approximations (cf. Section 2.4). A high-level view of SE is depicted in Algorithm 1. We assume that the set of paths of $P$, denoted $Paths(P)$, is finite. The algorithm builds iteratively a set of tests by exploring all the feasible paths.
Algorithm 1: Symbolic Execution algorithm

Input: a program $P$ with a finite set of paths $Paths(P)$
Output: $TS$, a set of pairs $(t, \sigma)$ such that $P(t)$ covers $\sigma$

1  $TS := \emptyset$;
2  $S_{paths} := Paths(P)$;
3  while $S_{paths} \neq \emptyset$ do
4      choose $\sigma \in S_{paths}$; $S_{paths} := S_{paths} \setminus \{\sigma\}$;
5      compute path predicate $\phi_\sigma$ for $\sigma$;
6      switch $solve(\phi_\sigma)$ do
7          case $sat(t)$: $TS := TS \cup \{(t, \sigma)\}$;
8          case $unsat$: skip;
9      endsw
10 end
11 return $TS$

The three major components of the SE algorithm are the following: (1) the path selection strategy (line 4); (2) path predicate computation, with predicates in some theory $T$; (3) satisfiability checking, using a solver taking a formula $\phi \in T$ and returning either $sat$ with a solution $t$ or $unsat$. We focus in this article on path predicate computation, which is where the C/S policy matters most. Note that the effective solvability of a path predicate may depend a lot on its construction. From now on, we do not distinguish between SE and DSE.

¹ This assumption is enforced through a bound on paths.
² SE tools typically rely on off-the-shelf SMT solvers.

2.3 Path predicate computation In order to remain both general and concrete, we present path predicate computation on a small core language well suited to low-level analysis. We choose DBA [20, 6], which has been used in several binary-level analyzers [6, 20, 7, 5]. The core language is presented in Table 1, where Var denotes a set of variables (typically: registers) and Val denotes the set of values, here bitvectors of statically known size. The program is represented as a map from (control) locations to instructions. Operator @ represents both reads at and writes to a distinguished variable Mem (modeling the memory), depending on whether it appears in a lhs (write) or in an expression (read).
All basic bit-level operations are available, including machine arithmetic and bitwise logical operators. The set of instructions includes assignments, static jumps, computed jumps (with an implicit cast from Val to Loc), conditional jumps and a stop operation. We denote by l, v and bv some elements of Loc, Var and Val.

| program ::= ε | stmt program |
| stmt ::= ⟨l, inst⟩ |
| inst ::= lhs := expr | goto expr | goto l | ite(expr)? goto l : goto l′ | stop |
| lhs ::= v | @ expr |
| expr ::= v | bv | @ expr | ⋄u expr | expr ⋄b expr |
| ⋄u ::= ¬ | − |
| ⋄b ::= + | − | ×u,s | ÷u,s | ≤u,s | ⊕ | … |

Table 1: DBA instructions

In the sequel, Instr denotes the set of instructions and Expr the set of expressions. The map from locations to instructions is denoted Δ : Loc → Instr. The operational semantics is given in a standard way: each instruction updates a concrete memory state Σ and moves control to the next instruction. Here, Σ is a total function mapping each variable v ∈ Var to a value bv ∈ Val (respecting size constraints), and mapping variable Mem to an array from addresses (values of size addr_size) to bytes (values of size 8). Path predicate computation. We denote by Σ∗ the symbolic memory state, which maps each variable v ∈ Var to a symbolic value φ (a logical term over logical variables ranging over Val) and the distinguished variable Mem to a logical array from addresses to bytes. The path predicate ϕ is a first-order logic formula over logical variables and logical arrays. At a given point of the execution, the internal state of the algorithm is composed of l, Σ∗ and ϕ, respectively a location, a symbolic memory state and the (current) path predicate. The algorithm starts from the initial location l0, the initial symbolic state Σ∗0 associating a fresh logical variable to each program variable and a fresh logical array to Mem, and ϕ0 ≜ true. It proceeds instruction by instruction along the execution, and the computed path predicate ϕ is returned. Recall that the execution trace being fixed, the successor of each branching instruction is known.
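The instruction-by-instruction computation just described can be sketched on straight-line code (a toy encoding of ours, with terms as nested tuples): assignments rebind a variable to a logical term in the symbolic state, and taken branch conditions are accumulated into the path predicate.

```python
# Toy path predicate computation: the symbolic state maps program variables
# to logical terms, and phi collects the constraints of taken branches.

def sym_expr(state, e):
    """Symbolic evaluation of an expression against the symbolic state."""
    if isinstance(e, str):        # variable: look up its current term
        return state[e]
    if isinstance(e, int):        # constant
        return e
    op, l, r = e                  # binary operation: evaluate both sides
    return (op, sym_expr(state, l), sym_expr(state, r))

def sym_step(state, phi, inst):
    """One step: an assignment rebinds a variable, a taken branch adds a constraint."""
    if inst[0] == "assign":
        _, v, e = inst
        return dict(state, **{v: sym_expr(state, e)}), phi
    if inst[0] == "assume":
        return state, phi + [sym_expr(state, inst[1])]
    raise ValueError(inst[0])

# The path of Figure 1: x := a*b; x := x+1; branch x > 10 taken.
state = {"a": "a0", "b": "b0", "x": "x0"}   # fresh logical variables
phi = []
for inst in [("assign", "x", ("*", "a", "b")),
             ("assign", "x", ("+", "x", 1)),
             ("assume", (">", "x", 10))]:
    state, phi = sym_step(state, phi, inst)

assert phi == [(">", ("+", ("*", "a0", "b0"), 1), 10)]   # a*b + 1 > 10
```

Memory (the @ operator) is omitted here; in the real algorithm Mem is a logical array handled with select/store.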
The path predicate computation algorithm over DBA is given in Figure 2, where → represents path predicate computation itself while →c represents the symbolic evaluation of a DBA expression. Recall that φ (resp. ϕ) denotes a symbolic value (resp. a formula). For instance, the var rule states that the symbolic evaluation of a variable v is the symbolic value stored for v in the current symbolic memory state Σ∗, denoted Σ∗(v). The rule for a computed jump goto e branching to location l′ reads as follows: expression e is symbolically evaluated into the symbolic value φe, l′ is converted to a concrete address with to_val : Loc → Val, and the constraint to_val(l′) = φe is added to the path predicate being built, modeling the fact that the execution flow must lead to l′. The remaining unexplained symbols are: - the symbols $\hat{\diamond}_u$ and $\hat{\diamond}_b$ are the logical counterparts of the unary and binary operators of concrete expressions, e.g. “+” is evaluated to the logical operator bvadd of the bitvector theory; - select/store are the standard logical operators from the theory of arrays, representing reads at and writes to specific array indexes; - “fresh” designates a new logical variable in the formula. Property 1. The path predicate computation algorithm of Figure 2 is correct and complete, i.e. it returns a correct and complete path predicate. 2.4 Concretization & Symbolization In practice, performing a fully symbolic path predicate computation as shown in Figure 2 is not necessarily feasible, for various reasons: unavailable parts of the code, presence of an environment, concrete operators outside the scope of the underlying solver, etc. That is why concretization and symbolization were introduced into symbolic execution [26, 21, 18] (cf.
Figure 1 for examples): Concretization uses run-time values in order to under-approximate the path predicate, making it possible to handle, in a precise but limited way, parts of an execution which are either missing (e.g., system calls) or too costly to reason about. For instance, concretizing read and write addresses significantly reduces the complexity of the path predicate, since the theory of arrays is computationally hard to solve. Symbolization over-approximates the path predicate by introducing a fresh logical variable, making it possible to generalize certain program steps, keeping the reasoning exhaustive but less precise. For example, symbolizing eax after a system call is a good way to simulate all possible return values of the call. Propagation computes the path predicate as explained in Section 2.3, without any extra approximation. Both concretization and symbolization help make SE more robust on real programs, yet they come at the price of losing either completeness (concretization) or correctness (symbolization). The decision of whether a value is concretized or symbolized is in general hard-coded inside the path predicate computation, and many alternative choices exist in the literature (cf. Section 7), more or less documented. Our goal is precisely to design a flexible and clear specification mechanism for such decisions. 3. MOTIVATIONS 3.1 The case for clear C/S policy specification Let us consider an instruction \( x := @ (a \times b) \) (recall that \( @ \) denotes the dereferencing operator), and a policy stating that read expressions should be concretized. Let us assume that runtime values are 7 for \( a \) and 3 for \( b \).
Then, there are at least three ways of understanding this “concretization” (\( M \) is a variable representing memory): <table> <thead> <tr> <th>Policy</th> <th>Resulting constraint</th> <th>Note</th> </tr> </thead> <tbody> <tr> <td>CS1</td> <td>$x := \text{select}(M, 21)$</td> <td>[incorrect]</td> </tr> <tr> <td>CS2</td> <td>$x := \text{select}(M, 21) \land a \times b = 21$</td> <td>[minimal]</td> </tr> <tr> <td>CS3</td> <td>$x := \text{select}(M, 21) \land a = 7 \land b = 3$</td> <td>[atomic]</td> </tr> </tbody> </table> The first formula is simple yet incorrect (we lose that \( a \times b = 21 \)), the second formula is correct and minimal w.r.t. the initial objective, and the third formula is correct but very restrictive since it imposes values for both \( a \) and \( b \) — we can see it as an atomic concretization of read expressions, affecting only variables. Note however that CS2 does not allow us to get rid of the \( \times \) operator, which may be a problem if our solver does not support it, while both CS1 and CS3 do. None of these three policies is clearly better; it all depends on the context. Yet, which policy was intended? This example demonstrates that C/S policies can show subtle differences, and that clear specifications are required. 3.2 The case for dedicated C/S policies Let us consider the C program presented in Figure 3, where \( x, y, \) and \( z \) are supposed to be tainted [25], i.e. under the control of the user. This program is therefore vulnerable: variable \( \text{buf} \) can be overflowed at lines 7 and 8. In both cases, the pointer \( \text{ptr} \) could be overwritten\(^3\), potentially hijacking the function call at line 9. ![Figure 2: Path predicate computation.](image) Nevertheless, only the second case (line 8) is interesting from an exploitability point of view, since the attacker can control both the value to write (z*2) and the destination address (\( \text{buf}[y] \)), while in the first case value 42 probably does not produce an executable address.
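Returning for a moment to the example of Section 3.1, the difference between the three readings of “concretize the read address” can be made tangible by enumerating their solution sets (a toy encoding of ours, rendering each policy as the constraint it puts on the inputs a and b):

```python
# Three readings of "concretize the read address" for x := @(a*b), with
# runtime values a=7, b=3 (so the address evaluates to 21). In all three,
# the read itself becomes select(M, 21); what differs is the side constraint.

def cs1(a, b):  # incorrect: nothing relates a*b to the concrete address 21
    return True

def cs2(a, b):  # minimal correct: a * b = 21
    return a * b == 21

def cs3(a, b):  # atomic: pin the variables themselves, a = 7 and b = 3
    return a == 7 and b == 3

dom = [(a, b) for a in range(1, 25) for b in range(1, 25)]
sols1 = {p for p in dom if cs1(*p)}
sols2 = {p for p in dom if cs2(*p)}
sols3 = {p for p in dom if cs3(*p)}

# CS3 is strictly more restrictive than CS2, itself strictly more restrictive
# than CS1; and CS1 even admits inputs with a*b != 21, for which reading at
# address 21 does not match the actual execution (hence incorrectness).
assert sols3 < sols2 < sols1
assert any(a * b != 21 for (a, b) in sols1)
```

This makes explicit why none of the three is universally better: CS2 preserves the most solutions while staying correct, but CS1 and CS3 additionally eliminate the × operator.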
```
1 void example1(int x, int y, int z) {
2   int val;
3   int buf[10];
4   void (*ptr)();
5   ptr=&fun1;  //a function elsewhere
6   val=z*2;    //val is tainted
7   buf[x]=42;  //buf[x] is tainted, 42 is not
8   buf[y]=val; //buf[y] and val are tainted
9   ptr();
10 }
```

![Figure 3: Motivating example](image)

Let us suppose that our goal is to detect such forms of weakness through DSE. Here, results will highly depend on the C/S choices for memory reads and writes. For instance, we can consider two standard C/S policies: CS4: concretize the destination address of every write operation, propagate (i.e. keep symbolic) otherwise. CS5: concretize the destination address of every write operation if it is not tainted, propagate otherwise. CS4 scales well since it avoids the computation of array formulas with symbolic indexes (which introduce combinatorial reasoning), but it leads to a strong under-approximation of the path predicate, possibly preventing the detection of the vulnerability. On the other hand, since CS5 keeps both \( \text{buf}[x] \) and \( \text{buf}[y] \) symbolic because they are tainted, the solver is indeed able to provide a valuation for \( x \) and \( y \) allowing it to overwrite \( \text{ptr} \) and to hijack the control-flow. Yet, this policy also keeps \( \text{buf}[x] \) symbolic at line 7, although the written value 42 is not controllable, leading to unnecessary computations here. A more appropriate policy would keep a write symbolic only when both its destination address and the written value are tainted. \[^3\]We assume here that \( \text{ptr} \) is just below \( \text{buf} \) in the stack frame, which is compiler-dependent. 4. SPECIFICATION OF C/S POLICIES Our main goal is to design a high-level specification language for C/S policies able to encode the major policies found in the literature. Especially, we want to be able to distinguish between CS1, CS2 and CS3 (Section 3.1) and to express CS4 and CS5 (Section 3.2). Besides expressiveness, the following properties are also desirable: (1) a clear semantics; (2) simplicity and concision; (3) independence (Section 3.2).
We achieve these goals through CSml, a high-level rule-based language offering pattern matching and subterm checks on the expression and the instruction being processed. As an example, the encoding of CS4 is presented in Table 2. It will be explained in Section 4.2.

| ∗ :: ⟨@ ?e := ?x⟩ :: ⟨!e⟩ :: ∗ ⇒ C ; |
| default ⇒ P ; |

Table 2: CS4 policy

Before presenting CSml together with its semantics (Section 4.2), we define formally what a C/S policy is, and how it interacts with path predicate computation (Section 4.1). 4.1 Revisiting DSE with C/S policies Formalization of C/S policies. The goal of a C/S policy is to decide whether a given expression of the execution trace must be: - concretized (C), i.e. replaced by its concrete value in the trace; - symbolized (S), i.e. replaced by a fresh (unconstrained) symbol; - or propagated (P), keeping its current value as in the standard algorithm of Figure 2. We denote by ρ = {C, S, P} the set of possible decisions, and by State the set of concrete memory states. We define a C/S policy csp_expr as a function that takes as input a location l, an instruction i, an expression e and a concrete memory state Σ, and returns a decision d ∈ ρ. Formally: \[ csp_{expr} : Loc \times Instr \times Expr \times State \rightarrow ρ \] Path predicate computation with C/S policy. The C/S policy is queried inside the path predicate computation algorithm each time an expression must be evaluated. Intuitively, the standard algorithm of Figure 2 corresponds to the case where the decision is always P (propagate the current value), starting from an initial state where all variables are symbolized.
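The interplay between csp_expr and expression evaluation can be sketched as follows (a toy version of ours, not the paper's implementation: the sample policy looks only at the expression, terms are nested tuples, and eval_conc plays the role of evalΣ):

```python
# Policy-aware symbolic evaluation: returns a symbolic term plus the side
# constraints induced by concretizations. S introduces a fresh symbol, C pins
# the evaluated term to its runtime value, P simply recurses.

import itertools

fresh_ids = itertools.count()

def eval_conc(state, e):
    """Concrete evaluation of e against runtime values (evalSigma)."""
    if isinstance(e, str):
        return state[e]
    if isinstance(e, int):
        return e
    op, l, r = e
    ops = {"+": lambda x, y: x + y, "*": lambda x, y: x * y}
    return ops[op](eval_conc(state, l), eval_conc(state, r))

def sym_eval_cs(policy, state, e):
    """Return (symbolic term, list of side constraints) for expression e."""
    decision = policy(e)
    if decision == "S":                        # fresh unconstrained symbol
        return f"fresh{next(fresh_ids)}", []
    if isinstance(e, (str, int)):              # structural evaluation...
        term, side = e, []
    else:                                      # ...querying the policy on
        op, l, r = e                           # each sub-expression
        tl, cl = sym_eval_cs(policy, state, l)
        tr, cr = sym_eval_cs(policy, state, r)
        term, side = (op, tl, tr), cl + cr
    if decision == "C":                        # pin to the runtime value
        v = eval_conc(state, e)
        return v, side + [("=", term, v)]
    return term, side                          # P: propagate

# Sample policy: concretize products only; runtime state a=7, b=3.
policy = lambda e: "C" if isinstance(e, tuple) and e[0] == "*" else "P"
term, constraints = sym_eval_cs(policy, {"a": 7, "b": 3}, ("+", ("*", "a", "b"), 1))
assert term == ("+", 21, 1)
assert constraints == [("=", ("*", "a", "b"), 21)]   # keeps correctness
```

Note how the C case keeps concretization correct by construction: the concrete value is only used together with the equality constraint relating it to the evaluated term.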
A version of the path predicate computation algorithm revisited to take C/S policies into account is presented in Figure 4. The main difference with the standard approach is that the symbolic evaluation operator →c is slightly modified into →cs in order to query the policy. Before evaluating an expression e, csp_expr(l, i, e, Σ) is queried in order to know which action shall be performed (lower part of Figure 4): - S: the expression e is replaced by a new symbol; - P: the expression e is symbolically evaluated with →cs, applying the C/S policy to each of its sub-terms; - C: the expression e is symbolically evaluated with →cs, then the resulting logical expression is constrained to be equal to the concrete evaluation evalΣ(e) of e, in order to preserve the correctness of concretization. Note that →c and →cs now both return a symbolic expression and a formula, the latter representing the constraints potentially induced by concretizing some subterms of the expression being evaluated. 4.2 Specification language for C/S policies We now describe CSml, our rule-based language, whose syntax is given in Table 3. Basic principles. A rule is of the form guard ⇒ ρ, where the guard checks whether the rule should be fired (typically using pattern matching and subterm checks) and ρ is the decision to be returned by the policy. As explained before, rules are queried with the current location, instruction, expression and concrete memory state. Namely, guards are denoted by πloc :: πins :: πexpr :: πΣ, where: - πloc is a predicate on the location, - πins is a predicate on the instruction, - πexpr is a predicate on the expression, - πΣ is a predicate on the concrete memory state. Rules are tested sequentially, and the first fireable rule returns its associated action. If no rule fires, then a default rule is applied. Inside each rule, guard predicates are also checked sequentially, so that πloc must be satisfied before πins is checked, and so on.
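This sequential, first-match semantics (with a mandatory default) can be sketched directly (our own toy rendering: guards are plain predicates over the four rule inputs, and instructions are modeled as tuples where ('store', addr, value) stands for @addr := value):

```python
# A policy is an ordered list of (guard, decision) pairs plus a default
# decision; the first guard that holds determines the decision.

def make_policy(rules, default):
    def csp_expr(loc, inst, expr, state):
        for guard, decision in rules:
            if guard(loc, inst, expr, state):
                return decision          # first fireable rule wins
        return default                   # default => decision
    return csp_expr

# CS4 rendered in this style: concretize the destination address of every
# write, propagate otherwise.
cs4 = make_policy(
    rules=[(lambda l, i, e, s: i[0] == "store" and e == i[1], "C")],
    default="P")

write = ("store", ("+", "ebp", "y"), ("*", "z", 2))
assert cs4(0, write, ("+", "ebp", "y"), {}) == "C"   # the write address
assert cs4(0, write, ("*", "z", 2), {}) == "P"       # the written value
```

The total-function and determinism properties discussed in Section 4.3 are visible here: the rule list is finite and ordered, and the default guarantees an answer on every input.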
Note that any of these predicates can be replaced by ∗, which always evaluates to true. Finally, guard predicates may communicate in a limited way through meta-variables and placeholders.

| policy ::= rules default | default |
| rules ::= rule | rule rules |
| rule ::= guard ⇒ ρ |
| default ::= default ⇒ ρ |
| guard ::= πloc :: πins :: πexpr :: πΣ |
| πloc ::= loc | [loc..loc] | ∗ |
| πins ::= ⟨pinstr⟩ | ∗ |
| πexpr ::= ⟨pexpr⟩ | expr+ ≼ term+ | expr+ ≺ term+ | ∗ |
| πΣ ::= P(expr) | ∗ |
| expr+ | extended expression, allowing placeholders |
| pexpr | expr pattern, allowing placeholders and meta-variables |
| pinstr | the same w.r.t. instructions |

Table 3: Policy language

Matching and meta-variables. The predicates πexpr and πins typically check whether the input expression (resp. instruction) matches a given pattern pexpr (resp. pinstr). A pattern is similar to an expression (resp. instruction), but with two additional kinds of variables. **Meta-variables** (prefixed by ?) match any term. Once successfully matched, these terms become available as placeholders in the subsequent guard predicates. When a meta-variable does not need to be reused, one can employ the anonymous meta-variable ?∗. **Placeholders** (prefixed by !) take the value of their corresponding meta-variable (e.g., !e for ?e) once matched. The distinguished placeholder ! contains the current expression being processed. Given a pattern p, we denote the predicate “does the input match p?” by ⟨p⟩. As an example, we concretize the expression being added to esp, in any assignment instruction: \[ \ast :: \langle ?\ast := \text{esp } + \text{?e} \rangle :: \langle \text{!e} \rangle :: \ast \Rightarrow \mathcal{C} \] Here, ?e is defined in \( \pi_{\text{ins}} \) and then available in \( \pi_{\text{expr}} \) through !e.
The rule reads as follows: if the current instruction assigns the sum of esp and some expression e to any lhs, and the current expression matches e, then it should be concretized. Put another way: if we are evaluating an expression e in the context of an instruction where e is added to esp and assigned to some lhs, then e should be concretized. We can now understand the encoding of CS4 given in Table 2: if we are evaluating an expression e in the context of an assignment where e is used as the write address, then e is concretized; otherwise it is propagated. **Subterm.** This language is already quite expressive, but still does not allow matching a nested sub-expression depending on its context. The typical usage is, when willing to concretize a read address, to check whether the given expression is in the scope of a read operation. This is solved by introducing the \( \preceq \) operator (resp. \( \prec \)), which checks whether an expression is a subterm (resp. a strict subterm) of another one. Specifying that any read or write expression must be concretized can then simply be written as follows, where ?i matches a whole instruction: \[ \begin{align*} \text{correct concretization of r/w expressions [CS2]} \\ \ast :: \langle \text{?i} \rangle :: (@\ !) \preceq \text{!i} :: \ast \Rightarrow \mathcal{C} \end{align*} \] The rule can be read as follows: when evaluating an expression e (given by the special placeholder !) such that the read expression @ ! is a subterm of the current instruction (captured by ?i), then e should be concretized. Note that \( \preceq \) and \( \prec \) can be applied to (extended) terms containing placeholders (expr+ and term+ in Table 3). **Other predicates.** \( \pi_{\text{loc}} \) consists in checking that the input location l is either equal to a given location loc or within a range of locations [loc..loc′]. \( \pi_{\Sigma} \) is a predicate over Expr that can query information from \( \Sigma \).
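The matching machinery behind these guards is ordinary first-order pattern matching plus a subterm check; a minimal sketch (our own encoding, with '?x' as meta-variables and '?*' as the anonymous one):

```python
# A small pattern matcher for the meta-variable/placeholder mechanism, and a
# subterm predicate in the style of the CSml operator "is a subterm of".

def match(pattern, term, env=None):
    """Return the binding environment if pattern matches term, else None."""
    env = dict(env or {})
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern == "?*":                        # anonymous meta-variable
            return env
        name = pattern[1:]
        if name in env:                            # repeated occurrence
            return env if env[name] == term else None
        env[name] = term                           # bind, usable as !name
        return env
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term):
        for p, t in zip(pattern, term):
            env = match(p, t, env)
            if env is None:
                return None
        return env
    return env if pattern == term else None        # literal leaf

def subterm(needle, haystack):
    """Non-strict subterm check: needle occurs somewhere inside haystack."""
    if needle == haystack:
        return True
    return isinstance(haystack, tuple) and \
        any(subterm(needle, part) for part in haystack)

# Guard  * :: <?* := esp + ?e> :: <!e> :: *  against  eax := esp + (4*ecx):
inst = (":=", "eax", ("+", "esp", ("*", 4, "ecx")))
env = match((":=", "?*", ("+", "esp", "?e")), inst)
assert env == {"e": ("*", 4, "ecx")}      # ?e bound, available as !e
assert subterm(("*", 4, "ecx"), inst)     # the bound expression is in inst
```

A strict-subterm variant (the second operator) would simply skip the `needle == haystack` case at the top level.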
For example, we can imagine performing some concretization and/or symbolization depending on the runtime value of the expression being evaluated. The interest of \( \pi_{\Sigma} \) will become clearer once we allow extended memory states (cf. Section 4.4). 4.3 Properties of the specification language CSml ensures interesting properties of the C/S policy being defined. In order to present them, we need a few additional definitions. A CSml rule is said to be well-defined if it is well-typed and placeholders are used in an appropriate manner. Note that the well-definedness of a rule is automatically checkable. A C/S policy is well-defined if it is a total function (deterministic behavior, defined on any input), and it is correct if it leads to the computation of a correct path predicate (cf. Section 2.3). Then, by construction, we have the following property: **Property 2.** A set of well-defined CSml rules defines a well-defined C/S policy. \( ^{3}\text{Property 2 also comes from the sequential ordering of rules and the presence of a default rule.} \) 4.4 Advanced features We propose here a few extensions of the CSml core language, presented in less detail due to space limitations. Richer decisions. We enrich the set of possible decisions by allowing a domain restriction on both $S$ and $P$, now denoted $S_D$ and $P_D$, where $D$ is an interval constraint (resp. singleton constraint) of the form $[a..b]$ (resp. $[a]$) with $a$ and $b$ expressions evaluating to bitvector values. This feature is useful for limiting the domain of a fresh logical variable, but it can also be used to encode incorrect concretization or restricted propagation (cf. below).
Note that the domain does not need to be defined by constant values; for example, its definition can involve the runtime evaluation of bitvector expressions, through the function $eval_\Sigma : \text{Expr} \rightarrow \text{Val}$:

incorrect concretization of r/w expressions [CS1] [23]
∗ :: ⟨?i⟩ :: (@ !) ≼ !i :: ∗ ⇒ $S_{[eval_\Sigma(!)]}$ ;

restriction of r/w expressions [5]
∗ :: ⟨?i⟩ :: (@ !) ≼ !i :: ∗ ⇒ $P_{[eval_\Sigma(!) - 10 \,..\, eval_\Sigma(!) + 10]}$ ;

Richer subterm constraints. We allow chaining of subterm constraints, e.g. $e \preceq pe_1 \preceq \ldots \preceq pe_n \preceq \text{term}^+$, together with the use of anonymous meta-variables inside the $pe_i$. This allows finer subterm relationships, such as checking that an expression is a subterm of an expression pattern, itself used within another expression.

recursive concretization of r/w expressions
∗ :: ⟨?i⟩ :: ! ≼ (@ ?∗) ≼ !i :: ∗ ⇒ C ;

Compared with the concretization CS2 presented in Section 4.2, this rule enforces the concretization of all subterms of a r/w expression. This policy is also slightly different from the atomic concretization CS3, whose encoding is shown hereafter. Richer predicates. We can also allow more predicates in the language. This can be done at two stages: either enriching the four classes of predicates already defined, or adding new classes of predicates. For the first category, it could be useful to have a var predicate indicating whether a term is a variable or not. An application is to restrict concretization to atomic variables:

atomic concretization of r/w expressions [CS3]
∗ :: ⟨?i⟩ :: var(!) ∧ ! ≼ (@ ?∗) ≼ !i :: ∗ ⇒ C ;

For the second category, it could be useful to consider a predicate class $\pi_{\text{step}}$ regarding the step of the execution, allowing for example to define a C/S policy step by step, in a trace-oriented manner, which may sometimes come in handy. Extended memory states. In the same vein, it can be interesting to enrich the predicate $\pi_{\Sigma}$ working on a concrete memory state $\Sigma$ into a predicate $\pi_{\Sigma^+}$ working on an extended concrete memory state $\Sigma^+$. Basically, an extended concrete memory state is a concrete memory state enriched with additional runtime information collected along the execution. A typical example of such a predicate is $T_{\Sigma^+}(e)$, indicating whether an expression $e$ is tainted or not in a given extended memory state $\Sigma^+$, using dynamic taint information [25]. Note that this extension makes the C/S policy dependent on the services provided by the underlying dynamic execution engine. While it is fair to assume that a concrete evaluation function $eval_{\Sigma}$ is available on any dynamic execution engine, more exotic queries on $\Sigma^+$ may not be available. We assume that C/S policies querying unsupported $\Sigma^+$-functions (or $\Sigma^+$-predicates) are syntactically rejected. C/S through memory injection. Besides C/S at the level of symbolic evaluation, another common pattern is to enforce concretization and/or symbolization through a direct modification of the symbolic memory state. This is particularly useful to handle unknown or hard-to-reason-about functions (e.g. system calls, cryptographic functions) with side-effects or returning complex data structures. Note that this kind of C/S is different from the one we have considered so far, since it permanently modifies the value of a $lhs$ (inside $\Sigma^*$), while $\text{csp}_{\text{expr}}$ affects a single evaluation of an expression.
For example, C/S memory injection allows to declare that at some location, variable $\text{eax}$ receives a fresh value (which will last along the trace until $\text{eax}$ is rewritten), while $\text{csp}_{\text{expr}}$ allows to declare that at some location, variable $\text{eax}$ evaluates as if it were unconstrained (with no impact on the remainder part of the trace). C/S injection can be handled similarly to C/S in expression evaluation. Due to space limitation, we only sketch the idea. We introduce a new function \[ \text{csp}_{\text{mem}} : \text{Loc} \times \text{Instr} \times \text{State} \rightarrow (\text{lhs} \mapsto \{C, S\}) \] which takes as argument a location, an instruction and a memory state, and returns a map from $\text{lhs}$ to decisions, which are here limited to $C$ and $S$. Intuitively, the map represents the modifications which have to be performed on the current symbolic memory state $\Sigma^+$ before the symbolic execution goes on. Contrary to $\text{csp}_{\text{expr}}$, concretizations defined by $\text{csp}_{\text{mem}}$ are not ensured to be correct, as the symbolic memory state is modified without any additional (correctness) constraint. Discussion. Altogether, these extensions provide a very fine control over the C/S policy, allowing for example to encode the subtle differences between correct concretization, incorrect concretization, recursive concretization and atomic concretization. 4.5 Encoding of standard C/S policies To illustrate how our language works, we show the encoding of several state-of-the-art policies from the literature, not yet covered in Sections 4.2 and 4.4. For instance, CUTE and DART [26, 21] concretizes both read and write addresses, as well as part of non-linear operations (here: left operand of any $\times$ operator). The associated policy is shown in Table 4. | $\ast :: \langle ? \pi \rangle :: (\langle @ ! \rangle \prec \li :: \ast \Rightarrow C ;$ | $\ast :: \langle ? 
\pi \rangle :: (\langle I \longleftarrow ? \ast \rangle \prec \li :: \ast \Rightarrow C ;$ default $\Rightarrow P ;$ Table 4: CUTE/DART policy A variant for memory operations consists in also concretizing non-tainted expressions [23]. The corresponding policy is shown in Table 5, where \( \overline{T}_{\Sigma^+}(e) \) indicates whether an expression \( e \) is tainted or not in a given extended memory state \( \Sigma^+ \) (cf. Section 4.4). \[ \begin{array}{llllll} * & :: & (\forall x) & :: & (\exists y) & :: & * & :: & \Rightarrow & C & ; \\ * & :: & * & :: & * & :: & \neg \overline{T}_{\Sigma^+} (\forall x) & \Rightarrow & C & ; \\ \text{default} & & & & & & & & & & \Rightarrow & P & ; \\ \end{array} \] Table 5: CUTE/DART policy with tainting The approach followed in EXE [13] in case of multi-level dereferencing consists in concretizing all \( r/w \) expressions but the most nested one. The encoding of such a policy is shown in Table 6. \[ \begin{array}{llllll} * & :: & (\forall y) & :: & (\exists ! x) & :: & * & :: & \Rightarrow & C & ; \\ \text{default} & & & & & & & & & & \Rightarrow & P & ; \\ \end{array} \] It is important here to use \( \prec \) rather than \( \preceq \). Table 6: EXE policy Finally, the policy of Mayhem [16] consists in concretizing all write expressions while keeping read expressions symbolic as long as they cannot take too many values (otherwise, concretizing them). We need here to consider \( \Sigma^+ \) enriched with an interval analysis. The encoding is then given in Table 7, where \( \text{card}_I(e) \) gives the number of possible values for \( e \) according to the interval information available in \( \Sigma^+ \). \[ \begin{array}{llllll} * & :: & (\forall y) & :: & (\exists ! x) & :: & \text{card}_I (\forall x) < 1024 & \Rightarrow & P & ; \\ \text{default} & & & & & & & & & & \Rightarrow & P & ; \\ \end{array} \] Table 7: Mayhem policy **Summary.**
Table 8 presents a summary of the kinds of C/S policies CSml can encode, together with the required extension of the language. It is remarkable that CSml can encode all popular C/S policies despite being a limited language. Hence, we think that our rule-based language manages to capture the crucial aspects of current C/S policies. **Limits.** We do not know of any major existing C/S policy that cannot be encoded into CSml. Yet, the framework has some limitations, coming from both the ordered evaluation of guard predicates and the very restricted communication between those predicates. Here are two such limitations. - A C/S policy does not depend on the symbolic state we are building; for example, we cannot decide to concretize a term if all its leaves (variables) are already concretized. - A C/S policy does not depend on the formula we are solving. For example, we cannot compute a path predicate, pass it to a solver (or to a simplifier) and then request concretization or symbolization depending on the solver’s output. Note, however, that the extended memory state \( \Sigma^+ \) does allow one to overcome most of the above limitations, assuming one is willing to store (resp. query) very complex information into (resp. from) \( \Sigma^+ \). In our view, \( \Sigma^+ \) should be used with care, and only as a last resort.
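As a rough illustration of the two extensions above (this is our own sketch, not the paper's implementation; all class and function names are invented), the code below models an extended state $\Sigma^+$ as a concrete store plus taint facts, and applies a $\text{csp}_{\text{mem}}$-style decision map destructively to a symbolic store:

```python
# Hypothetical sketch: an extended concrete memory state (Sigma+) exposing a
# taint predicate to C/S policies, and csp_mem-style memory injection that
# rewrites the symbolic store in place (decisions limited to C and S).

class ExtendedState:
    def __init__(self, concrete, tainted=()):
        self.concrete = dict(concrete)   # variable -> concrete value (Sigma)
        self.tainted = set(tainted)      # extra runtime info: tainted variables

    def eval_concrete(self, var):
        """eval_Sigma: assumed available on any dynamic execution engine."""
        return self.concrete[var]

    def is_tainted(self, expr_vars):
        """T_{Sigma+}(e); here, e is tainted iff one of its variables is."""
        return any(v in self.tainted for v in expr_vars)

def apply_injection(symbolic, state, decisions):
    """Apply a csp_mem result (lhs -> 'C' | 'S') to the symbolic store.
    'S' installs a fresh unconstrained variable; 'C' pins the concrete value.
    No correctness constraint is recorded, mirroring the paper's caveat."""
    fresh = 0
    for lhs, d in sorted(decisions.items()):
        if d == "S":
            fresh += 1
            symbolic[lhs] = f"fresh_{fresh}"
        else:
            symbolic[lhs] = state.eval_concrete(lhs)
    return symbolic

state = ExtendedState({"eax": 5, "ebx": 7}, tainted={"ebx"})
assert state.is_tainted({"eax", "ebx"}) and not state.is_tainted({"eax"})

sym = apply_injection({"eax": "x0", "ebx": "x1"}, state, {"eax": "S", "ebx": "C"})
assert sym == {"eax": "fresh_1", "ebx": 7}
```

The convention that an expression is tainted as soon as one of its variables is tainted is a common default and is assumed here, not taken from the paper.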
Table 8: Encoding of C/S policies <table> <thead> <tr> <th>policy</th> <th>language</th> </tr> </thead> <tbody> <tr> <td>minimal concretization</td> <td>CS2</td> </tr> <tr> <td>recursive concretization</td> <td>extended</td> </tr> <tr> <td>atomic concretization</td> <td>extended ∩</td> </tr> <tr> <td>incorrect concretization</td> <td>extended decisions</td> </tr> <tr> <td>r/w full-concrete</td> <td>dart/cute</td> </tr> <tr> <td>r/w full-symbolic</td> <td>core language</td> </tr> <tr> <td>r/w domain restriction</td> <td>extended decisions</td> </tr> <tr> <td>r/w multi-level</td> <td>extended</td> </tr> <tr> <td>r/w taint-based</td> <td>extended ∩</td> </tr> <tr> <td>r/w dataflow-based</td> <td>extended decisions</td> </tr> </tbody> </table> 5. IMPLEMENTATION The C/S policy mechanism presented so far has been integrated into Binsec/se [19], an open-source DSE tool built on top of the Binsec framework [20] for binary code analysis. Binsec and Binsec/se are developed in OCaml. They rely on DBA [6] and use solvers Z3 [8] and Boolector [11]. An overview of the modified architecture is shown in Figure 5. The C/S policy is specified in a textual format close to CSml. Subsequently, while the SE engine creates the path predicate, the C/S policy is queried for each encountered expression (Section 4.1) via a hook function, instantiated from the CSml specification. Figure 5: CSml support in Binsec/se This version of Binsec/se is currently the first SE tool supporting high-level specification of a wide range of C/S policies. The core engine is fully functional: all experiments of Section 6 have been carried out with it. Concerning CSml, the whole core language is supported, as well as extended memory states (currently: taint and heap information) and extended decisions. Other features are in progress, especially memory injection. 6. EXPERIMENTS We report in this section three experiments that have been carried out with CSml and Binsec/se. 
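The hook mechanism just described amounts to first-match evaluation of an ordered rule list with a default decision, as in the policy tables of Section 4.5. A toy version of such a hook (guards and field names are illustrative, not actual CSml syntax):

```python
# Toy first-match evaluation of an ordered C/S rule list with a default,
# mirroring the shape of the policy tables (guards tried top to bottom).
# Guards and query fields are illustrative, not actual CSml syntax.

def make_policy(rules, default):
    def decide(query):
        for guard, decision in rules:
            if guard(query):
                return decision
        return default
    return decide

# Mayhem-like flavor: concretize writes; keep reads symbolic while the
# interval analysis reports few possible values, otherwise concretize.
mayhem_like = make_policy(
    rules=[
        (lambda q: q["kind"] == "write", "C"),
        (lambda q: q["kind"] == "read" and q["card"] < 1024, "P"),
        (lambda q: q["kind"] == "read", "C"),
    ],
    default="P",
)

assert mayhem_like({"kind": "write", "card": 2}) == "C"
assert mayhem_like({"kind": "read", "card": 4}) == "P"
assert mayhem_like({"kind": "read", "card": 70000}) == "C"
assert mayhem_like({"kind": "other", "card": 0}) == "P"
```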
- First, we are interested in studying the impact of C/S policies targeting memory reads and writes. While (the new) policy PP* works better on average, a generic C/S mechanism is still strongly recommended. This is the first time such a comparison is performed. - Second, we are interested in estimating the overhead of rule-based C/S specification. The conclusion is that while our approach does impact the cost of formula creation, this cost remains negligible w.r.t. formula solving. Hence, our approach is practical. - Finally, we come back to the motivating example of Section 3.2, in order to argue for the benefit of defining specific purpose-oriented C/S policies. 6.1 Quantitative evaluation of C/S policies We study the impact on SE of C/S policies targeting memory reads and writes. This is a typical application of C/S, since a faithful modeling of memory operations may lead to hard-to-solve formulas. More precisely, we investigate the following questions: RQ 1: Do C/S policies have a significant impact on SE in terms of the quality of results? RQ 2: Is there a best C/S policy for read/write operations among standard policies? Protocol. We consider 5 different C/S policies regarding the handling of memory reads and writes, namely: CC, CP, PC, PP*, PP, where the first letter indicates whether read addresses are concretized (C) or propagated (P), and the second letter indicates the same for write addresses. PP* is a special case: all read and write addresses are kept symbolic, except for stack registers (i.e., on x86, registers esp and ebp are concretized). While CC and PP are standard policies, the three others are new. Experiments are performed over 167 programs (x86 executables), composed of programs from NIST/SAMATE [29] (a standard benchmark for program analysis), all Unix coreutils and several Windows malware samples [30], for a total of 45,242 solver queries. Details can be found in Table 9.
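Under this naming scheme, the five policies can be read as one template with two parameters (the read and the write decision) plus PP*'s stack-register exception. A hypothetical sketch of that reading (names are ours):

```python
# Sketch: the benchmarked policies as one parameterized family.
# First letter = decision for read addresses, second = for write addresses;
# PP* keeps everything symbolic except the x86 stack registers esp/ebp.

STACK_REGS = {"esp", "ebp"}

def rw_policy(read_dec, write_dec, concretize_stack_regs=False):
    def decide(kind, base_reg=None):
        # PP*'s exception: addresses based on stack registers are concretized.
        if concretize_stack_regs and base_reg in STACK_REGS:
            return "C"
        return read_dec if kind == "read" else write_dec
    return decide

CC  = rw_policy("C", "C")
CP  = rw_policy("C", "P")
PC  = rw_policy("P", "C")
PP  = rw_policy("P", "P")
PPs = rw_policy("P", "P", concretize_stack_regs=True)   # PP*

assert CP("read") == "C" and CP("write") == "P"
assert PP("read", base_reg="esp") == "P"
assert PPs("read", base_reg="esp") == "C"
assert PPs("write", base_reg="eax") == "P"
```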
All instructions are traced, except calls to library functions, which are stubbed by symbolic values (fresh logical variables). The solver is Z3, with a time-out of 30 seconds. We measure the influence of these policies in the following way: for each benchmark program, we consider an arbitrary (but reproducible) initial concrete execution and we ask the SE engine to iteratively invert every condition along the initial execution, leading to a set of new path predicate computations and solver queries. We record for each policy the number of queries which have been successfully solved (SAT), proved infeasible (UNSAT) or which have triggered a time-out (TO). Note that only the first category leads to new test inputs and (hopefully) better code coverage. Results and conclusion. Part of the results is summarized in Tables 10 and 11. First, [RQ 1] the choice of C/S policy may greatly affect the outcome of SE: there are ≥5x more SAT results on 20/167 examples, and up to 286x more SAT results on one program. Second, [RQ 2] there is no clear hierarchy between the considered policies. Indeed, even if PP* performs very well on many examples (it is the best policy on 41/167 examples, and it is optimal on 117/167 examples, see Table 11), the global number of successfully solved instances is pretty close for all policies but PP (Table 10). Actually, while a more symbolic policy leads in theory to more satisfiable queries, it may also come at the price of harder-to-solve formulas and time-outs. These results are a strong argument in favor of a generic C/S mechanism. The major threats to validity are the representativeness of the experimental setting (policies, programs) and internal bugs in the SE tool. We mitigate these threats by using standard policies and variants of them, as well as a large program set coming from three distinct, well-known and publicly-available benchmarks.
Moreover, we rely on publicly-available tools (SE, solver) and results have been cross-checked for internal validity. 6.2 Overhead of the rule-based language We evaluate the overhead of our parametric C/S policy mechanism. We want to answer the two following questions: RQ 3: What is the extra cost of rule-based C/S specification, especially w.r.t. hard-coded policies (by means of callback functions) and no C/S policy at all? RQ 4: Is it affordable, i.e., is the extra cost low w.r.t. solving time? Protocol. We reuse the experimental setting of the previous evaluation. We consider two metrics: the cost of formula creation, which is directly affected by C/S policies, and the ratio between formula creation and formula solving. We record these metrics for the 5 previous policies, implemented either in CSml or through native callbacks, and we consider a baseline consisting of SE without any C/S policy. Results and conclusion. Table 12 reports the ratio between formula creation and formula creation plus solving. Note that solving time does not depend on the way C/S is implemented. [RQ 3] CSml does lead to a more expensive path predicate computation (on average: x3 w.r.t. hard-coded callbacks and up to x5 w.r.t. no C/S at all, at worst x7 on some examples), yet [RQ 4] the cost of predicate computation is still negligible w.r.t. the cost of predicate solving (average of 1.45% for the most expensive C/S policy; maximum of 23% on some easy-to-solve path predicates). Hence, our rule-based C/S mechanism brings extra flexibility at only a very slight extra cost. **Threats to validity**: besides the issues discussed in the previous evaluation, the considered policies are rather simple w.r.t. the expressive power of CSml. While the study is of interest because these policies are representative, further investigations are required for complex CSml policies.
### Table 12: Overhead evaluation <table> <thead> <tr> <th></th> <th>min</th> <th>max</th> <th>average</th> </tr> </thead> <tbody> <tr> <td>base (no C/S)</td> <td>0.04%</td> <td>3%</td> <td></td> </tr> <tr> <td colspan="4">rule-based C/S policy</td> </tr> <tr> <td>CC</td> <td>0.1%</td> <td>17%</td> <td>1.2%</td> </tr> <tr> <td>CP</td> <td>0.1%</td> <td>23.5%</td> <td>1.45%</td> </tr> <tr> <td>PC</td> <td>0.08%</td> <td>12.8%</td> <td>0.85%</td> </tr> <tr> <td>PP</td> <td>0.05%</td> <td>12.3%</td> <td>0.95%</td> </tr> <tr> <td>PP*</td> <td>0.05%</td> <td>4%</td> <td>0.48%</td> </tr> <tr> <td colspan="4">hard-coded C/S policy</td> </tr> <tr> <td>CC</td> <td>0.05%</td> <td>8.5%</td> <td>0.5%</td> </tr> <tr> <td>CP</td> <td>0.05%</td> <td>8.2%</td> <td>0.5%</td> </tr> <tr> <td>PC</td> <td>0.05%</td> <td>8%</td> <td>0.45%</td> </tr> <tr> <td>PP</td> <td>0.05%</td> <td>6%</td> <td>0.45%</td> </tr> <tr> <td>PP*</td> <td>0.04%</td> <td>3%</td> <td>0.3%</td> </tr> </tbody> </table> The table gives the ratio between the cost of path predicate computation (impacted by C/S) and the whole cost (i.e., formula creation + formula solving). Note that the time for formula solving does not depend on the way C/S is implemented (rules, hard-coded, no C/S). ### 6.3 Dedicated C/S policies We come back to the motivating example of Figure 3. In order to check that the vulnerability at line 9 can be exploited, we follow the general line of [1, 23, 16] and strengthen the path predicate with the extra condition that, at line 9, ptr must be equal to an arbitrarily-chosen value, here 0x61626364. Depending on the C/S policy, our strengthened path predicate may or may not be satisfiable. We consider CS4 and CS5 (Section 3.2), as well as the following (original) one: **CS6**: Memory writes are kept symbolic if both the destination address and the value to write are tainted.
The encoding of CS4 is given in Table 2; those of CS5 and CS6 follow straightforwardly from CS4 and Table 5. As expected, CS6 does allow one to recover input values triggering the exploit and hijacking the execution control flow to address 0x61626364 (with Z3: x=0x20, y=0x20 and z=0x30b131b2), while CS4 does not reveal the exploit (the formula is too constrained), and CS5 obtains an exploit, but with a more complex formula (with an extra symbolic expression for &buf[x]). ### 7. RELATED WORK Several DSE frameworks have been developed so far, each of them offering its own solution to the C/S issue. We summarize hereafter the most representative solutions, and we compare them with our approach. **Built-in C/S policies.** Most SE tools implement a single hard-coded built-in C/S policy, which can favor either scalability (i.e., by considering most values as concrete) or completeness (i.e., by keeping more symbolic values). For instance, the pioneering tools DART [21] and CUTE [26] fall in the former category (memory addresses, results from external library calls and part of non-linear expressions are concretized), while PATHCRAWLER keeps the computation fully symbolic [28] and EXE [13] stands in between. More recent engines build on more sophisticated heuristics in the hope of reaching a sweet spot between scalability and completeness, typically based on tainting [25, 23] or dataflow analysis [16]. We showed in Section 4.5 how such policies can be specified in CSml. **More flexible policies.** Klee [12] is a popular symbolic execution engine operating on LLVM bytecode [31]. By default, each program variable is considered concrete, unless specified otherwise. Source-level primitives⁶ allow one to indicate that a variable should be considered “symbolic” from a given control location. Finally, symbolic values are (implicitly) made concrete when calling native external libraries, unless a model (i.e., a C stub) is provided.
S2E [18, 17] allows performing symbolic execution at the system level, taking into account not only the target application but also its whole environment. Although partially based upon Klee, S2E offers several original features, especially: a powerful (but rather complex) plugin mechanism allowing the user to interact with the execution engine by inserting external code upon reception of certain events, and the ability to switch between symbolic and concrete modes. In addition, S2E provides several ways to introduce symbolic values at arbitrary memory locations. **Comparison with our proposal.** Both S2E and Klee do allow the user to write and integrate C/S policies based on memory injection (cf. Section 4.4). They do not provide policies based on expression evaluation, while we argued in Section 4.4 that both notions are useful and orthogonal. Expressing a C/S policy with Klee is rather tedious and error-prone, since such a policy has to be explicitly and manually woven into the code under test; S2E is more flexible, thanks to its plugin mechanism. These two approaches focus on practical implementation, while we are also interested in the formalization, specification and, ultimately, understanding of C/S policies. In particular, we propose a much more comprehensive and declarative way to define C/S policies. ### 8. CONCLUSION Concretization and symbolization (C/S) is a crucial part of modern SE tools, yet C/S is often treated as “black magic”, with little documentation and hard-coded heuristics. We propose a clear separation of concerns between the specification of C/S policies on one side, through the CSml rule-based language, and the algorithmic core of SE on the other side, revisited to take C/S policies into account. CSml is simple, yet powerful enough to encode all popular C/S policies from the literature. Such a mechanism has been implemented on top of Binsec/SE, yielding the first SE tool supporting high-level specification of a wide range of C/S policies.
We also carried out the first quantitative comparison of C/S policies, demonstrating that the level of genericity we offer is both very beneficial to SE and affordable (very low overhead). This work paves the way for a systematic study of C/S policies, in order to better understand their impact and to identify interesting trade-offs. --- ⁶ Like klee_make_symbolic. 9. REFERENCES
RSP-QL*: Enabling Statement-Level Annotations in RDF Streams Robin Keskisärkkä1, Eva Blomqvist1,2, Leili Lind1,2, and Olaf Hartig1 1 Linköping University, Linköping, Sweden firstname.lastname@liu.se 2 RISE Research Institutes of Sweden AB / Division ICT SICS East, Linköping, Sweden firstname.lastname@ri.se Abstract. RSP-QL was developed by the W3C RDF Stream Processing (RSP) community group as a common way to express and query RDF streams. However, RSP-QL does not provide any way of annotating data on the statement level, for example, to express the uncertainty that is often associated with streaming information. Instead, the only way to provide such information has been to use RDF reification, which adds additional complexity to query processing, and is syntactically verbose. In this paper, we define an extension of RSP-QL, called RSP-QL*, that provides an intuitive way for supporting statement-level annotations in RSP. The approach leverages the concepts previously described for RDF* and SPARQL*. We illustrate the proposed approach based on a scenario from a research project in e-health. An open-source implementation of the proposal is provided and compared to the baseline approach of using RDF reification. The results show that this way of dealing with statement-level annotations offers advantages with respect to both data transfer bandwidth and query execution performance. Keywords: RSP-QL* · RDF* · RDF Stream Processing · e-health 1 Introduction Recent years have seen an increasing interest in processing and analyzing streaming information as it is generated by applications, services, sensors, and smart devices. RDF Stream Processing (RSP) leverages the principles of Linked Data and the Semantic Web to cope with heterogeneity in data, but employs strategies inspired from stream processing to cope with high velocity data streams. 
During the last decade, several RSP systems and models have been proposed, all of which have provided their own syntax, semantics, and underlying assumptions about the nature of RDF streams [6,7]. The RSP community group³ was formed to define a common model for producing, transmitting and continuously querying RDF streams. The first version of this common query model (RSP-QL) was proposed by Dell’Aglio et al. in 2014 [7], and the draft of the abstract syntax and semantics was published by the RSP community group in 2016 [2]. Data generated by sensors is almost always coupled with provenance information, or a level of uncertainty representing, for instance, lack of precision or a knowledge gap. For example, all values reported by a temperature sensor may be associated with some error describing a probability distribution. The RDF specification provides a vocabulary that allows metadata to be represented about RDF triples using RDF reification [11]. In practice, however, this is not widely adopted as a standard for representing and managing such metadata on the Semantic Web [8]. RDF\(^*\) was recently proposed as a way to support a concise representation of statement-level metadata, while remaining backwards compatible with standard RDF [9,10]. By enclosing a triple using the strings ‘<<’ and ‘>>’, the extension allows it to be used in the subject or object position of other triples. This allows statement-level metadata to be provided directly. For example, the triple \(:bob :knows :alice\) could be annotated with the source \(\text{wikipedia}\) as follows: \(<< :bob :knows :alice >> :source :wikipedia\). Similarly, the authors propose SPARQL\(^*\) as an extension of SPARQL for querying RDF\(^*\) data, where SPARQL\(^*\) supports similar nesting of triple patterns. We propose an extension to RSP-QL that leverages RDF\(^*\)/SPARQL\(^*\) for annotating and querying streaming data.
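To make the conciseness argument concrete, one can count triples: standard rdf:Statement-style reification costs four triples per reified statement before any metadata is attached, whereas RDF* attaches metadata directly to the quoted triple. A back-of-the-envelope sketch (the code and function names are ours):

```python
# Sketch: triple counts for annotating n statements with one metadata
# property each, via RDF reification versus RDF* quoted triples.

def reification_triples(n):
    # Per statement: rdf:type rdf:Statement, rdf:subject, rdf:predicate,
    # rdf:object (4 triples), plus 1 metadata triple on the statement node.
    return n * (4 + 1)

def rdf_star_triples(n):
    # Per statement: 1 metadata triple whose subject is the quoted triple,
    # e.g.  << :bob :knows :alice >> :source :wikipedia .
    return n * 1

assert reification_triples(1) == 5
assert rdf_star_triples(1) == 1
assert reification_triples(1000) == 5000
```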
We show that the proposed approach has several benefits over RDF reification when it comes to statement-level annotations. The approach is motivated based on a use case from a current research project, where we attempt to detect abnormal situations in an e-health scenario. The rest of the paper is organized as follows. Section 2 briefly discusses the relevant related work, while Section 3 describes a use-case scenario that both motivates the proposed approach and exemplifies the requirements addressed by the proposal. Section 4 describes the proposed approach informally, and Sections 5–6 provide the necessary formal definitions, where Section 5 defines the data model and Section 6 defines the syntax and semantics of the proposed RSP-QL extension. Section 7 provides an application-based evaluation of the approach. Section 8 describes a prototype implementation and a performance evaluation of the implemented system. Section 9 discusses the impact of the presented work and Section 10 summarizes the main conclusions of the paper. 2 Related Work Over the past decade, there has been a growing interest in providing models and languages for combining the principles of the Semantic Web with streaming information. RDF Stream Processing (RSP) systems aim to provide extensions to RDF and SPARQL for representing and querying streaming data. However, though several RSP systems have emerged that provide extensions and operators for this purpose [1,3,4,13,18], they typically provide different languages, constructs, operators, and evaluation semantics [7]. The W3C RSP community group was formed to define a common model for representing and querying streaming RDF data. The proposed model and language, RSP-QL [7], can be used to model the behavior of most of the current RSP systems, and provides well-defined semantics for explaining query execution. 
However, none of the existing RSP approaches have given much attention to aspects related to representing metadata in streams, such as uncertainty or provenance. The RSP-QL stream model allows such annotations to be provided on the graph level, but annotations on the triple level are not supported. The term statement-level metadata refers to data that captures information about a single statement or fact. The RDF specification includes the notion of RDF reification, which lets a set of RDF triples describe some other RDF triple [11]. The approach requires the inclusion of four additional RDF triples for every statement where metadata is to be provided. Another approach is to leverage named graphs, where the identifier of a graph can be used to attach metadata to statements [12]. However, this approach has the disadvantage of inhibiting the application of named graphs for other uses. Finally, singleton properties have been proposed as an alternative approach, where a distinct property is provided for each triple to be annotated [15]. The singleton-properties proposal introduces a large number of unique predicates, which is atypical for RDF data, and disadvantageous for common SPARQL optimization techniques [19]. Additionally, these approaches result in verbose queries [9]. In standard RDF, there therefore exists no convenient way of annotating data with metadata on the statement level [10]. The RDF*/SPARQL* approach was proposed as a way of supporting a more intuitive representation, by allowing triples in the subject and object positions of RDF statements [9,10]. In this paper, we propose to extend RSP-QL based on this approach. 3 Use-Case Scenario In this section, we describe a use-case scenario to exemplify the kinds of requirements that may be addressed by combining RSP-QL with RDF*/SPARQL*. The scenario originates from an ongoing research project, E-care@home⁴, in which the aim is to develop privacy-preserving AI solutions for home care of elderly patients.
In addition to developing technical solutions, the project has put great emphasis on studying the requirements of stakeholders. These requirements have been documented in a project deliverable [14]. As part of this deliverable, a number of personas and use-case scenarios were also developed, including the following description of a scenario involving the patient Rut, who has advanced chronic obstructive pulmonary disease (COPD) and is multimorbid. “...The system can automatically sense abnormal situations, e.g. when certain health parameters deviate from the normal values, or when the overall situation as assessed by a multitude of sensors appears abnormal. When the system detects such situations, it sends out an alarm to a suitable recipient based on the severity of the deviation (e.g., emergency dispatch for a life-threatening deviation, the patient’s physician if no immediate action is required, or next-of-kin if suitable). [...] sitting in the same position in a chair in the living room for an unusually long time given that there are no entertainment devices turned on at the moment. Her heart rate is above normal, but her breathing is slower than normal. Small motions indicate that she is not asleep, yet she is not moving much. Her oxygen levels are about normal. The system decides to classify this as a low-emergency abnormal state. The system also knows that Rut’s partner has left the house a few hours ago. It therefore sends an alert to him [...] the alert reaches Rut’s partner as he is already on his way home. He hurries home and opens the door only to find out that Rut is in good health and has been enjoying a paperback copy of the latest crime novel by a famous Swedish author for the past few hours.” [14] ⁴ http://ecareathome.se/ Like any health-care system, the one envisioned by E-care@home sets high requirements in terms of patient safety, system reliability, and transparency.
To this end, all the data that the system uses to draw conclusions and to generate suggestions, or even to take action, must be accompanied by some assessed confidence. For instance, in the scenario above, to put patient safety first the system cannot afford to miss an abnormal and highly dangerous situation, but on the other hand it needs to be able to disregard observations that are not reliable. As an example, whenever a pulse oximeter reports the oxygen saturation of a patient, the system also needs to know the confidence that the system can put in this value. The sensor may have a fixed confidence value, but the system may also derive an adjusted value that takes into account contextual factors of the measurement, such as the position of the sensor and the activity of the patient at measurement time. Regardless of how the confidence value is derived, it needs to be reported as part of the reported observation. 4 Overview of RSP-QL* The main difference between RSP and traditional RDF/SPARQL processing is that the former introduces a time dimension to processing [6]. The time dimension in RSP-QL is managed by allowing windows to define discrete subsets over RDF streams, and at any point in time, a window can be queried as a regular RDF dataset. The approach proposed in this paper extends RSP-QL in two fundamental ways: RDF streams are extended to support RDF*, and the supported graph patterns in RSP-QL are extended to support those in SPARQL*. The example in Listing 1.1 shows an RSP-QL* query that illustrates the main features and language constructs. The registered query is evaluated every 10 seconds. It defines a time-based window with a width of 1 minute that slides every 10 seconds over the heart-rate stream. The query then matches the heart-rate value and confidence of each observation in the window using an RDF* pattern [9]. This is the only difference between RSP-QL and RSP-QL* in this query.
The results are then filtered based on a threshold, and the heart-rate value and timestamp of the matched observations are reported. There are conceptually no limitations on the complexity of the provided annotations, and they can, e.g., instead be represented as confidence intervals or distributions rather than single values.

```sparql
PREFIX ex: <http://www.example.org/ontology#>
PREFIX sosa: <http://www.w3.org/ns/sosa/>

REGISTER STREAM <heart-rate/alert> COMPUTED EVERY PT10S AS
SELECT ?hr ?time
FROM NAMED WINDOW <window/1> ON <http://stream/heart-rate> [RANGE PT1M STEP PT10S]
WHERE {
    WINDOW <window/1> {
        GRAPH ?g {
            << ?o sosa:hasSimpleResult ?hr >> ex:confidence ?confidence .
            FILTER(?confidence > 0.9 && ?hr > 120)
        }
        ?g <generatedAt> ?time .
    }
}
```

Listing 1.1: Example of an RSP-QL* query.

5 Data Model

This section defines the concepts that capture the notion of streams considered by our approach. We begin with the basic notions of RDF and RDF*. As usual [5,16], we assume three pairwise disjoint, countably infinite sets \(\mathcal{I}\) (IRIs), \(\mathcal{B}\) (blank nodes), and \(\mathcal{L}\) (literals). Then, an RDF triple is a tuple \((s, p, o)\) with \(s, p \in \mathcal{I}\), and \(o \in \mathcal{I} \cup \mathcal{B} \cup \mathcal{L}\). Furthermore, a set of RDF triples is called an RDF graph. RDF* extends this notion of triples by allowing the subject or the object to be another triple [9]. This form of nesting of triples, which may be arbitrarily deep, allows for statements to capture metadata about other statements. Formally, an RDF* triple is defined recursively as follows [9]: (i) any RDF triple is an RDF* triple, and (ii) given two RDF* triples \(t\) and \(t'\), and the RDF terms \(s \in \mathcal{I} \cup \mathcal{B}\), \(p \in \mathcal{I}\), and \(o \in \mathcal{I} \cup \mathcal{B} \cup \mathcal{L}\), the tuples \((t, p, o)\), \((s, p, t)\), and \((t, p, t')\) are RDF* triples. The concept of an RDF dataset has been introduced to represent collections of RDF graphs [5]. We extend this concept to cover RDF* graphs.
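The recursive shape of RDF* triples can be sketched with nested Python tuples. This is purely illustrative (the representation and function name are not from the paper): a term is a string standing in for an IRI, blank node, or literal, or another triple, and nesting may be arbitrarily deep.

```python
# Minimal sketch of RDF* triples as nested 3-tuples (illustrative, not the
# paper's implementation). A term is a plain string or another triple.

def is_rdf_star_triple(t):
    """Check the recursive RDF* triple shape: subject and object may
    themselves be triples, while the predicate must be a plain term."""
    if not (isinstance(t, tuple) and len(t) == 3):
        return False
    s, p, o = t
    s_ok = isinstance(s, str) or is_rdf_star_triple(s)
    p_ok = isinstance(p, str)
    o_ok = isinstance(o, str) or is_rdf_star_triple(o)
    return s_ok and p_ok and o_ok

# A confidence annotation on a heart-rate observation, as one nested triple:
base = ("ex:obs1", "sosa:hasSimpleResult", '"123"')
annotated = (base, "ex:confidence", '"0.95"')

print(is_rdf_star_triple(base))       # a plain RDF triple is an RDF* triple
print(is_rdf_star_triple(annotated))  # an RDF* triple with a nested subject
```

The nested subject position directly mirrors the `<< ... >>` syntax used in the query listings.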
**Definition 1.** A named RDF* graph is a pair \((n, G^*)\) where \(n \in \mathcal{I} \cup \mathcal{B}\), which is called the graph name, and \(G^*\) is an RDF* graph. An RDF* dataset is a set \(D = \{G^*_0, (n_1, G^*_1), (n_2, G^*_2), ..., (n_i, G^*_i)\}\), where \(G^*_0\) is an RDF* graph, called the default graph of \(D\), and \((n_k, G^*_k)\) is a named RDF* graph for all \(k \in \{1, 2, ..., i\}\). While the RDF model is atemporal, the notion of an RDF stream has been introduced to capture the dynamic nature of streaming RDF data [7]. Along the same lines, we define an RDF* stream as a time-ordered sequence of elements that are captured by a specific form of RDF* datasets. **Definition 2.** Let \(p\) be an IRI that denotes a predicate to capture timestamps for named RDF* graphs. Then, an RDF* stream element \(E\) is an RDF* dataset that consists of a default graph \(G^*_0\) and exactly one named RDF* graph \((n, G^*)\) such that the default graph \(G^*_0\) contains one RDF triple of the form \((n, p, \tau)\), where \(\tau\) is a timestamp. To denote this timestamp \(\tau\) in \(E\) we write \(\tau(E)\). **Definition 3.** An RDF* stream $S$ is a potentially unbounded sequence of RDF* stream elements such that for every pair of such elements $E_i$ and $E_j$, where $E_i$ comes before $E_j$ (i.e., $S = (..., E_i, ..., E_j, ...)$), the following properties hold: 1. $\tau(E_i) \leq \tau(E_j)$, and 2. the names of the single named RDF* graph $(n_i, G^*_i)$ in $E_i$ and of the single named RDF* graph $(n_j, G^*_j)$ in $E_j$ are different (i.e., $n_i \neq n_j$). A named RDF* stream is a pair $(n, S)$ where $n \in \mathcal{I}$ and $S$ is an RDF* stream. We also need to define a notion of windows over such streams as a way of referencing discrete portions of potentially infinite data streams [7]. **Definition 4.** A window $W$ over an RDF* stream $S$ is a finite set of RDF* stream elements from $S$.
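The stream and window notions above can be sketched in Python. All names and the flattened element representation below are illustrative, not part of the formal model: each element carries its graph name, its timestamp \(\tau(E)\), and its RDF* graph, and a time-based selection over such a stream corresponds to the window operator formalized next.

```python
# Sketch (illustrative names): a stream element pairs a uniquely named
# RDF* graph with a timestamp; a time-based selection picks the elements
# whose timestamp lies in a half-open interval [l, u).

from dataclasses import dataclass

@dataclass
class StreamElement:
    name: str            # graph name n, unique within the stream (Def. 3)
    timestamp: float     # tau(E), recorded as (n, p, tau) in the default graph
    graph: frozenset = frozenset()   # the RDF* graph G* (immutable default)

def is_valid_stream(elements):
    """Definition 3: non-decreasing timestamps, pairwise distinct names."""
    names = [e.name for e in elements]
    times = [e.timestamp for e in elements]
    return (len(set(names)) == len(names)
            and all(a <= b for a, b in zip(times, times[1:])))

def time_based_window(stream, l, u):
    """The elements E with l <= tau(E) < u."""
    return [e for e in stream if l <= e.timestamp < u]

s = [StreamElement("g1", 5.0), StreamElement("g2", 12.0), StreamElement("g3", 20.0)]
print(is_valid_stream(s))                              # True
print([e.name for e in time_based_window(s, 10, 20)])  # ['g2']
```

Note that equal timestamps are allowed by property 1 of Definition 3; only the graph names must differ.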
In this paper, we focus explicitly on temporal window operators (other window operators, such as count-based windows, can be defined in a similar manner). To this end, we define a time-based window of an RDF* stream as a contiguous set of elements from the stream whose timestamp is in a given interval.

**Definition 5.** Given a time interval $[l, u)$, the time-based window over an RDF* stream $S$ for $[l, u)$, denoted by $W(S, l, u)$, is a window over $S$ that is defined as follows: $W(S, l, u) = \{E \mid E \text{ is in } S \text{ and } l \leq \tau(E) < u\}$.

Finally, we shall need a function that represents any window as an RDF* dataset. Informally, this dataset consists of all the named RDF* graphs of all RDF* stream elements within the window, and the default graph of this dataset is constructed from the default graphs in all these RDF* stream elements.

**Definition 6.** Let $W = \{E_1, E_2, ..., E_n\}$ be a window over some RDF* stream. The dataset representation of $W$, denoted by $DS(W)$, is the RDF* dataset that is constructed as follows:

- the default graph of $DS(W)$ is the union of the default graphs of the stream elements in $W$, i.e., $\bigcup_{\{G^*_0, (n, G^*)\} \in W} G^*_0$, and
- the set of named RDF* graphs in $DS(W)$ is $\{(n, G^*) \mid \{G^*_0, (n, G^*)\} \in W\}$.

6 Syntax and Semantics of RSP-QL*

This section defines RSP-QL*, which is an RDF*-aware extension of RSP-QL. RSP-QL, in turn, is an extension of SPARQL. Hence, our definitions in this section extend RSP-QL [7] along the lines of how SPARQL* extends SPARQL [9,10], and by also taking into account the abstract syntax and semantics draft of the W3C RSP community group [2]. For the SPARQL-specific constructs we adopt the algebraic SPARQL syntax introduced by Pérez et al. [16]. Due to space constraints, we limit ourselves to presenting only the core concepts of the language.

6.1 Syntax of RSP-QL* Queries

RSP-QL is an extension of SPARQL [17], and the basic building block is a basic graph pattern (BGP), that is, a finite set of triple patterns.
A triple pattern is a tuple \((s,p,o)\) where \(s\), \(p\), and \(o\) are variables or constants. Triple* patterns are then defined recursively as follows [9,10]:

- any triple pattern is a triple* pattern, and
- given two triple* patterns \(tp\) and \(tp'\), and \(s \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{V})\), \(p \in (\mathcal{I} \cup \mathcal{V})\), and \(o \in (\mathcal{I} \cup \mathcal{B} \cup \mathcal{L} \cup \mathcal{V})\), then \((tp,p,o)\), \((s,p,tp)\), and \((tp,p,tp')\) are triple* patterns.

A finite set of triple* patterns is referred to as a BGP*. On top of BGPs, RSP-QL supports all the other forms of graph patterns that have been introduced for SPARQL, and RSP-QL adds a new form to match data within windows of streaming data. We define a corresponding notion of patterns for RSP-QL*, but for brevity we here focus only on the core constructs.

**Definition 7.** An RSP-QL* pattern is defined recursively as follows:

1. Any BGP* is an RSP-QL* pattern.
2. If \(n \in (\mathcal{V} \cup \mathcal{I})\) and \(P\) is an RSP-QL* pattern, then (WINDOW \(n\) \(P\)) and (GRAPH \(n\) \(P\)) are RSP-QL* patterns.
3. If \(P_1\) and \(P_2\) are RSP-QL* patterns, then (\(P_1\) AND \(P_2\)), (\(P_1\) OPT \(P_2\)), and (\(P_1\) UNION \(P_2\)) are RSP-QL* patterns.

We now have everything required to define RSP-QL* queries, which consist of an RSP-QL* pattern and window declarations that are associated with IRIs to serve as names for the corresponding windows in the query.

**Definition 8.** A window declaration is a tuple \((u_S, \alpha, \beta, \tau_0)\) where \(u_S \in \mathcal{I}\) is an IRI (representing the name of a named RDF* stream), \(\alpha\) is a time duration (representing a window width), \(\beta\) is a time duration (representing a slide parameter), and \(\tau_0\) is a timestamp (representing a start time).
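The set var(tp) of variables occurring in a (possibly nested) triple* pattern, which the evaluation semantics relies on, can be computed by a short recursion. The conventions below are illustrative only: patterns are nested 3-tuples, and variables are strings starting with `?`.

```python
# Sketch: collect the variables of a triple* pattern. Conventions are
# illustrative: a pattern is a nested 3-tuple, variables are strings
# beginning with '?', all other strings are constants.

def pattern_vars(tp):
    """var(tp): the variables occurring anywhere in a triple* pattern."""
    out = set()
    for term in tp:
        if isinstance(term, tuple):
            out |= pattern_vars(term)    # recurse into an embedded pattern
        elif term.startswith("?"):
            out.add(term)
    return out

# The pattern << ?o sosa:hasSimpleResult ?hr >> ex:confidence ?c :
nested = (("?o", "sosa:hasSimpleResult", "?hr"), "ex:confidence", "?c")
print(sorted(pattern_vars(nested)))   # ['?c', '?hr', '?o']
```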
**Definition 9.** An RSP-QL* query is a pair \((\omega, P)\) where \(\omega\) is a partial function that maps some IRIs in \(\mathcal{I}\) to window declarations, and \(P\) is an RSP-QL* pattern such that for every sub-pattern (WINDOW \(n\) \(P'\)) in \(P\) it holds that if \(n \in \mathcal{I}\), then \(\omega\) is defined for \(n\), i.e., \(n \in \text{dom}(\omega)\).

6.2 Semantics of RSP-QL* Queries

We now define the semantics of RSP-QL* queries, for which we have to introduce some concepts used to define the query semantics of SPARQL and of SPARQL*. The query semantics of SPARQL is based on the notion of solution mappings [16] that map query variables to blank nodes, IRIs, or literals. For SPARQL*, this notion has been extended to also be able to map to RDF* triples. That is, a solution mapping is a partial function \( \eta : \mathcal{V} \to (\mathcal{T} \cup \mathcal{I} \cup \mathcal{B} \cup \mathcal{L}) \) where \( \mathcal{T} \) denotes the set of all RDF* triples [9,10]. The standard notions of compatibility, merging, and application of solution mappings can then be adapted as follows.

**Definition 10.** Two solution mappings \( \eta, \eta' \) are **compatible** if \( \eta(v) = \eta'(v) \) for every variable \( v \in \text{dom}(\eta) \cap \text{dom}(\eta') \).

**Definition 11.** The **merge** of two compatible solution mappings \( \eta \) and \( \eta' \), denoted by \( \eta \cup \eta' \), is a solution mapping \( \eta'' \) with the following three properties:

- \( \text{dom}(\eta'') = \text{dom}(\eta) \cup \text{dom}(\eta') \),
- \( \eta''(v) = \eta(v) \) for all \( v \in \text{dom}(\eta) \), and
- \( \eta''(v) = \eta'(v) \) for all \( v \in \text{dom}(\eta') \setminus \text{dom}(\eta) \).
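Compatibility and merging, together with the algebra operators defined next, can be sketched with Python dicts standing in for solution mappings (variable → RDF term or triple). All function names are illustrative.

```python
# Sketch of solution-mapping compatibility and merge, and of the join,
# union, and left-join operators over sets of mappings (here: lists of
# dicts). Illustrative only, not an engine implementation.

def compatible(m1, m2):
    """The mappings agree on every shared variable (Definition 10)."""
    return all(m1[v] == m2[v] for v in m1.keys() & m2.keys())

def merge(m1, m2):
    """Union of two compatible mappings (Definition 11)."""
    return {**m1, **m2}

def join(o1, o2):
    return [merge(a, b) for a in o1 for b in o2 if compatible(a, b)]

def union(o1, o2):
    return o1 + o2

def left_join(o1, o2):
    """Join, plus the mappings of o1 that join with nothing in o2."""
    return join(o1, o2) + [a for a in o1
                           if not any(compatible(a, b) for b in o2)]

obs = [{"?o": "ex:obs1", "?hr": 130}]
conf = [{"?o": "ex:obs1", "?c": 0.95}]
print(join(obs, conf))     # one merged mapping binding ?o, ?hr, and ?c
print(left_join(obs, []))  # obs is preserved even without a join partner
```

The left join is what gives OPT (optional) patterns their semantics: unmatched mappings from the left operand survive unextended.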
**Definition 12.** The **application** of a solution mapping \( \eta \) to an RSP-QL* pattern \( P \), denoted by \( \eta[P] \), is the RSP-QL* pattern obtained by replacing all variables in \( P \) according to \( \eta \). We now define the corresponding algebra operators join, union, and left join.

**Definition 13.** Let \( \Omega_1 \) and \( \Omega_2 \) be sets of solution mappings.

- \( \Omega_1 \Join \Omega_2 = \{ \eta_1 \cup \eta_2 \mid \eta_1 \in \Omega_1, \eta_2 \in \Omega_2, \eta_1 \text{ and } \eta_2 \text{ are compatible} \} \)
- \( \Omega_1 \cup \Omega_2 = \{ \eta \mid \eta \in \Omega_1 \text{ or } \eta \in \Omega_2 \} \)
- \( \Omega_1 \mathbin{⟕} \Omega_2 = (\Omega_1 \Join \Omega_2) \cup \{ \eta \in \Omega_1 \mid \forall \eta' \in \Omega_2 : \eta \text{ and } \eta' \text{ are not compatible} \} \)

Based on these algebra operators, RSP-QL* patterns are evaluated over a background dataset and a set of named windows at a given timestamp.

**Definition 14.** Let \( W \) be a partial function that maps some IRIs in \( \mathcal{I} \) to a window over some RDF* stream, respectively, and \( P \) be an RSP-QL* pattern such that for every sub-pattern (WINDOW \( n \) \( P' \)) in \( P \) with \( n \in \mathcal{I} \), it holds that \( W \) is defined for \( n \), i.e., \( n \in \text{dom}(W) \). Furthermore, let \( D \) be an RDF* dataset, \( G \) be an RDF* graph, and \( \tau \) be a timestamp. Then, the **evaluation** of \( P \) over \( D \) and \( W \) at \( \tau \) with \( G \), denoted by \( [P]_G^{D,W,\tau} \), is defined recursively as follows:

1. If \( P \) is a triple* pattern \( tp \), then \( [P]_G^{D,W,\tau} = \{ \eta \mid \text{dom}(\eta) = \text{var}(tp) \text{ and } \eta[tp] \in G \} \), where \( \text{var}(tp) \) denotes the set of variables occurring in \( tp \).
2. If \( P \) is (GRAPH \( u \) \( P' \)), then \( [P]_G^{D,W,\tau} = [P']_{G'}^{D,W,\tau} \) where \( (u, G') \in D \).
3. If \( P \) is (GRAPH \( ?x \) \( P' \)), then \( [P]_G^{D,W,\tau} = \bigcup_{(u, G') \in D} \left( [(\text{GRAPH } u \ P')]_G^{D,W,\tau} \Join \{ \eta_{?x \mapsto u} \} \right) \), where \( \eta_{?x \mapsto u} \) is the solution mapping that binds \( ?x \) to the respective graph name \( u \).
4. If \( P \) is (WINDOW \( u \) \( P' \)), then \( [P]_G^{D,W,\tau} = [P']_{G'}^{DS(W(u)),W,\tau} \) where \( G' \) is the default graph of the RDF* dataset \( DS(W(u)) \).
5. If \( P \) is (WINDOW \( ?x \) \( P' \)), then \( [P]_G^{D,W,\tau} = \bigcup_{u \in \text{dom}(W)} \left( [(\text{WINDOW } u \ P')]_G^{D,W,\tau} \Join \{ \eta_{?x \mapsto u} \} \right) \), where \( \eta_{?x \mapsto u} \) is the solution mapping that binds \( ?x \) to the respective window name \( u \).
6. If \( P \) is \( (P_1 \text{ AND } P_2) \), then \( [P]_G^{D,W,\tau} = [P_1]_G^{D,W,\tau} \Join [P_2]_G^{D,W,\tau} \).
7. If \( P \) is \( (P_1 \text{ UNION } P_2) \), then \( [P]_G^{D,W,\tau} = [P_1]_G^{D,W,\tau} \cup [P_2]_G^{D,W,\tau} \).
8. If \( P \) is \( (P_1 \text{ OPT } P_2) \), then \( [P]_G^{D,W,\tau} = [P_1]_G^{D,W,\tau} \mathbin{⟕} [P_2]_G^{D,W,\tau} \).

It remains to define the semantics of RSP-QL* queries, which contain window declarations in addition to an RSP-QL* pattern (cf. Definition 9).

**Definition 15.** Let $S$ be a finite set of named RDF* streams and $q = (\omega, P)$ be an RSP-QL* query such that for every IRI $u_S \in \text{dom}(\omega)$ there exists a named RDF* stream $(u_S, S) \in S$. Furthermore, let $D$ be an RDF* dataset and $\tau$ be a timestamp. The evaluation of $q$ over $D$ and $S$ at $\tau$, denoted by $[q]^{D,S,\tau}$, is defined as $[q]^{D,S,\tau} = [P]^{D,W,\tau}_G$ where $G$ is the default graph of $D$ and $W$ is a partial function such that $\text{dom}(W) = \text{dom}(\omega)$ and for every IRI $u \in \text{dom}(W)$, it holds that $W(u)$ is the time-based window $W(S, x - \alpha, x)$ with $(u_S, S) \in S$, $(u_S, \alpha, \beta, \tau_0) = \omega(u)$ and $x = \tau_0 + \alpha + \beta \times i$ for the greatest possible $i \in \mathbb{N}$ for which $x < \tau$.

7 Application-Based Evaluation

In this section, we evaluate RSP-QL* based on the application use-case scenario introduced in Section 3.
To this end, we make three assumptions: First, we assume that all parameters about the patient are provided in separate streams. Second, the thresholds for the physiological parameters are context dependent, and we assume that the background data contains information about Rut’s expected values with respect to some activity. Third, we assume that all physiological parameters are reported with a confidence value representing some inherent uncertainty of the sample. Listing 1.2 illustrates a typical query for the application scenario. For the sake of readability, we have simplified the query slightly compared to the actual project application. Additional optimization strategies would also be employed in practice to provide improved scalability. The inputs to the query are 5 different streams that report data about the patient’s current heart rate, breathing rate, oxygen saturation, location (of both Rut and Rut’s partner), and current activity, respectively. The activity stream might have been created by another reasoning mechanism in the system, which infers activities of daily life based on sensor inputs and the context. For each window, the values are filtered for specific values or a confidence threshold, and then the aggregated data is checked against the threshold values specific to the current context of the patient (e.g., including the type of activity). If these conditions are met, we consider it a low-emergency situation, as described in the scenario outlined in Section 3. The resulting event is pushed to another stream upon which the system can act appropriately. In our use-case scenario, the system would first contact Rut’s partner. Similar queries could be set up to deal with other situations that the system should be able to detect. The application of RSP-QL* to this project use case shows that it is possible to express the queries needed, and that the proposed language thereby fulfills our use-case based requirements. 
In particular, it is worth noting the compactness and relative readability of the query in Listing 1.2, as compared to the corresponding RDF reification query\(^5\) (excluded due to space constraints).

\(^5\)https://github.com/keski/RSPQLStarEngine/tree/master/publications/semantics2019

Listing 1.2: The RSP-QL⋆ query used in the use-case evaluation.

8 Performance Evaluation

In this section, we begin by briefly describing a prototype implementation of the proposed approach. We then report on the effects of the proposed RDF* stream model with respect to data bandwidth, and compare it with a baseline approach of using RDF reification. Finally, we compare the query execution performance of the prototype when using RDF* as opposed to RDF reification, while varying the number of annotated triples per streamed element. All experiments were run on a MacBook Pro with 16 GB 1600 MHz DDR3 memory and a 2.8 GHz Intel Core i7. The experiments were run using Java 1.8.0 with 2048 MB allocated for the JVM. All experiments were preceded by warm-up runs, and averages for execution times were collected only after memory usage had stabilized.

8.1 Prototype implementation

We implemented the prototype using Apache Jena\(^6\) and RDFstarTools\(^7\), where the latter provides a collection of Java libraries for processing RDF\(^\star\) data and SPARQL\(^\star\) queries. Additionally, we implemented a separate RSP-QL\(^\star\) query parser and integrated it with the standard Jena architecture, along with an extension of Jena’s query class to support the additional syntax elements defined in RSP-QL\(^\star\). For the query execution, the implementation provides an extension of Jena’s query engine and query execution, supporting the new query operators. During query execution, all windows over streams are materialized as individual RDF\(^\star\) datasets. The execution’s active dataset then changes as needed when a window operation is evaluated.
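The per-evaluation bookkeeping behind this window materialization, i.e., fixing the active window interval from a window declaration (width α, slide β, start τ₀) and the evaluation time τ as in Definition 15, can be sketched in Python. This is an illustration of the arithmetic, not the prototype's Jena-based implementation, and all names are invented for the example.

```python
# Illustrative sketch: at evaluation time tau, the active window is
# W(S, x - alpha, x) with x = tau0 + alpha + beta*i for the greatest
# natural number i such that x < tau (cf. Definition 15).

def active_window_bounds(tau0, alpha, beta, tau):
    i = (tau - tau0 - alpha) // beta          # candidate greatest i
    if tau0 + alpha + beta * i >= tau:        # step back on exact multiples
        i -= 1
    if i < 0:
        return None                           # no complete window yet
    x = tau0 + alpha + beta * i
    return (x - alpha, x)                     # the interval [l, u)

def evaluate(stream, tau0, alpha, beta, tau, pattern_eval):
    """Materialize the active window and evaluate a pattern over it.
    Stream elements are flattened to (timestamp, graph) pairs here."""
    bounds = active_window_bounds(tau0, alpha, beta, tau)
    if bounds is None:
        return []
    l, u = bounds
    window = [e for e in stream if l <= e[0] < u]
    return pattern_eval(window)

# Width 60 s, slide 10 s, stream started at t = 0, evaluated at t = 95:
print(active_window_bounds(0, 60, 10, 95))   # (30, 90)
```

Note the half-open interval: an element timestamped exactly at the upper bound belongs to the next window, matching Definition 5.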
To improve evaluation efficiency, all parsed nodes are encoded as integers in one of two dictionaries: the node dictionary or the reference dictionary. Regular RDF nodes are added to the node dictionary, while triple nodes are added to the reference dictionary, which (recursively) encodes each separate node of the triple. All nodes, regardless of type, are internally represented as an integer, where the most significant bit signals whether the ID represents a regular node or a reference triple. This allows the system to quickly check how a node should be decoded. Encoding and decoding iterators are provided to support moving between ID-based iterators and Jena’s standard iterator implementations. The prototype is provided as open-source\(^8\) under the MIT License. The underlying data structures can easily be changed by providing alternative implementations for the corresponding interfaces.

\(^6\)https://jena.apache.org/ (version 3.8.0)
\(^7\)https://github.com/RDFstar/RDFstarTools (version from 2019-02-28)
\(^8\)https://github.com/keski/RSPQLStarEngine

8.2 Serialization Overhead

One of the side-effects of using RDF reification to annotate triples is that it increases the size of the dataset, since for each reification triple four additional triples have to be added. Thus, one of the benefits of the proposed extension is the reduced overhead involved in transferring statement-level annotations in data streams. To compare the impact on bandwidth requirements, we compared the overhead in terms of bytes for each of the two approaches. The data was serialized using TriG*, which is an extension of Turtle* [9] for supporting named graphs, and compressed. The amount of metadata per annotated triple impacts the relative overhead of the two approaches. For this evaluation, the TriG* serialization of each RDF* stream element contains declarations of one prefix, a base IRI, and a single metadata statement per annotated triple. Fig.
1 shows the bandwidth required by the approaches as a function of the number of annotated triples per streamed element. The results show that the number of bytes required when using RDF* is around half of what is required when using RDF reification.

8.3 Query Execution Performance

The performance of the approach was evaluated on the prototype implementation. The streamed elements contained a single confidence-annotated triple, where the number of additional triples annotated with some other metadata predicate varied between experiment runs. A single evaluation query was used to match and filter all triples annotated with the confidence value. We compared query execution times when representing the metadata using RDF* and querying it using RSP-QL* versus representing the metadata using RDF reification and querying it using pure reification-based RSP-QL queries. The prototype applies no specific optimization techniques for the queries; thus, the two approaches differ only with respect to how statement-level metadata is represented internally. The RDF reification approach simply uses regular triple-pattern matching, whereas the RDF* approach represents the annotated triples as resources on the physical level. For the RDF reification query, we provided an additional version of the query optimized based on the heuristics described by Tsialiamanis et al. [19], where the order of the matched triple patterns was determined based on selectivity. Fig. 2 presents the average query execution times. The results show that the advantage of the proposed approach grows with the number of distinct triples annotated in each streamed element, but that this difference can potentially be reduced by applying established optimization heuristics.

9 Discussion

The proposed approach provides a compact and intuitive way for both representing and querying annotated triples.
Other approaches that could be considered for this purpose include single-triple named graphs [12], singleton properties [15], and RDF reification [11], but these approaches come with various drawbacks. The application of named graphs inhibits the use of the graph name for other purposes, which means it is not compatible with the structure of RDF stream elements. Singleton properties introduce large numbers of unique predicates, which can adversely affect query execution performance. RDF reification, on the other hand, is both part of the RDF standard and can be supported in RSP-QL. However, RDF reification is verbose, both with respect to representing and querying data. We note that RDF* and SPARQL* may be understood simply as syntactic sugar on top of RDF and SPARQL [9], and by extension this applies to the approach presented in this paper. However, the evaluation of the prototype implementation illustrates that representing annotated triples as resources on the physical level can have positive effects on the query execution level. When matching a single RDF reification triple, a total of four additional triple patterns have to be evaluated. In fact, due to this inefficiency, many RDF stores implement specific strategies for representing annotated triples [8]. For example, Virtuoso\textsuperscript{10} encodes RDF reification statements as quads, Apache Jena\textsuperscript{11} provides an implementation of a node type with direct access to the statement it reifies, and Blazegraph\textsuperscript{12} uses an approach similar to the one implemented in our prototype. RDF* and SPARQL*, and thus RSP-QL*, simplify the representation of complex scenarios, both from the perspective of modeling and of querying annotated metadata.

\textsuperscript{9}Compression here included the removal of excessive whitespace characters, the use of prefixes, and the use of predicate lists where appropriate.
For example, we may want to treat an RDF statement differently depending on whether the uncertainty associated with it has been automatically generated by a sensor, or if it originates from a physician. Querying this using RSP-QL⋆ simply involves having a triple⋆ pattern with two layers of nesting. As part of future work, we plan on relaxing some of the assumptions made in the semantics, and adding support for additional features defined in RSP-QL, such as count-based window operators and output stream operators.

\textsuperscript{10}https://virtuoso.openlinksw.com/
\textsuperscript{11}https://jena.apache.org/
\textsuperscript{12}https://wiki.blazegraph.com/

10 Conclusion

In this paper, we have presented a novel way of annotating and querying statement-level metadata in RDF Stream Processing (RSP), and formally defined the new continuous query language RSP-QL⋆. The approach extends RDF streams to allow triples to directly use other triples in the subject and object positions, and similarly extends the current version of RSP-QL to query these, by leveraging and building on the concepts previously proposed for RDF⋆ and SPARQL⋆ [9,10]. The proposed approach was applied in a use case from an e-health research project, where multiple data streams have to be queried in parallel, and over extended periods of time, to detect possibly abnormal situations. The results show that RSP-QL⋆ meets all our use-case requirements, and provides a compact and intuitive way of expressing and querying statement-level metadata, compared with the baseline approach of using RDF reification. Furthermore, the prototype implementation presented in the paper, which is provided as open-source, demonstrates benefits over the baseline approach, both with respect to the bandwidth required for data transfer and with respect to query execution performance over statement-level annotations.
RDF⋆ is a syntactically more compact way to express metadata annotations, and our experiments show that this difference is large enough to have an impact in deployed real-world systems and applications, where bandwidth may be limited. Although our prototype implementation is not optimized for query performance, we were able to demonstrate that the approach was faster with respect to query execution performance, when compared to using standard RDF reification. This is the first work on RSP that has focused on supporting annotations on the statement level. We believe that the proposed approach provides an intuitive and compact way for representing and querying statement-level metadata, and that this work provides a good foundation for future research on efficient management of, e.g., uncertainty and provenance, in RDF data streams. Acknowledgements. This research was supported by E-care@home, a “SIDUS – Strong Distributed Research Environment” project, funded by the Swedish Knowledge Foundation (KK-stiftelsen, Dnr: 20140217). Project website: http://ecareathome.se/. Olaf Hartig’s work on this paper has been funded by the CENIIT program at Linköping University (project no. 17.05). References
# On the reaction to deprecation of 25,357 clients of 4+1 popular Java APIs

Anand Ashok Sawant, Delft University of Technology, Delft, The Netherlands (A.A.Sawant@tudelft.nl)
Romain Robbes, PLEIAD @ DCC, University of Chile, Chile (rrobbes@dcc.uchile.cl)
Alberto Bacchelli, Delft University of Technology, Delft, The Netherlands (A.Bacchelli@tudelft.nl)

**Abstract**—Application Programming Interfaces (APIs) are a tremendous resource—that is, when they are stable. Several studies have shown that this is unfortunately not the case. Of those, a large-scale study of API changes in the Pharo Smalltalk ecosystem documented several findings about API deprecations and their impact on API clients. We conduct a partial replication of this study, considering more than 25,000 clients of five popular Java APIs on GitHub. This work addresses several shortcomings of the previous study, namely: a study of several distinct API clients in a popular, statically-typed language, with more accurate version information. We compare and contrast our findings with the previous study and highlight new ones, particularly on the API client update practices and the startling similarities between reaction behavior in Smalltalk and Java.

### I. Introduction

An Application Programming Interface (API) is a definition of functionalities provided by a library or framework that is made available to an application developer. APIs promote the reuse of existing software systems [1]. In his landmark essay “No Silver Bullet” [2], Brooks argued that reuse of existing software was one of the most promising attacks on the essence of the complexity of programming: “The most radical possible solution for constructing software is not to construct it at all.” Revisiting the essay three decades later [3], Brooks found that indeed, reuse continues to be the most promising attack on essential complexity. APIs enable this: to cite a single example, we found at least 15,000 users of the Spring API.
However, reuse comes with the cost of dependency on other components. This is not an issue when said components are stable. But evidence shows that APIs are not always stable: the Java standard API, for instance, has an extensive list of deprecated elements¹. API developers often deprecate features and replace them with new ones, and over time remove these deprecated features. These changes can break the client’s code. Studies such as Dig and Johnson’s [4] found that API changes breaking client code are common. The usage of a deprecated feature can be potentially harmful. Features may be marked as deprecated because they are not thread safe, contain a security flaw, or are to be replaced by a superior feature. The inherent danger of using a feature that has been marked as obsolete is good enough motivation for developers to transition to the replacement feature suggested by the API developers themselves. Besides these dangers, using deprecated features also leads to reduced code quality, and therefore to increased maintenance costs. With deprecation being a maintenance issue, we would like to see whether API clients actually react to deprecated features of an API. To our knowledge, Robbes et al. conducted the largest study of the impact of deprecation on API clients [5], investigating deprecated methods in the Squeak and Pharo software ecosystems. This study mined more than 2,600 Smalltalk projects hosted on the SqueakSource platform. Based on the gathered information, they looked at whether the popularity of deprecated methods increased, decreased, or remained as is after their deprecation. The Smalltalk study found that API changes caused by deprecation can have a major impact on the ecosystem. However, only a small percentage of the projects actually reacts to an API deprecation. Of the projects that do react, most systematically replace the calls to deprecated features with those that are recommended by API developers.
Surprisingly, this was done despite the fact that API developers in Smalltalk do not appear to document their changes as well as can be expected. The main limitation of that study is its focus on a niche programming community, i.e., Pharo. This resulted in a small dataset with information from only 2,600 projects in the entire ecosystem. Additionally, with Smalltalk being a dynamically typed language, the authors had to rely on heuristics to identify the reactions to deprecated API features. We conduct a non-exact replication [6] of the previous Smalltalk study [5], also striving to overcome its limitations. We study the reactions of more than 25,000 clients of 5 different APIs, using the statically-typed Java language; we also collect accurate API version information. Our results confirm that only a small fraction of clients react to deprecation, also in the Java ecosystem. Of those, systematic reactions are rare, and most clients prefer to delete the call made to the deprecated entity as opposed to replacing it with the suggested alternative. This happens despite the carefully crafted documentation accompanying most deprecated entities.

¹ See http://docs.oracle.com/javase/8/docs/api/deprecated-list.html

### II. Methodology

We define the research questions and describe our research method, contrasting it with the study we partially replicate [5].

**A. Research Questions**

In line with our partial replication target, we keep as much as possible the same research questions as the original work. Given our additional information, we add one novel research question (RQ0) and alter the order and, partially, the methodology we use to answer the research questions; this leads to some differences in the formulation. The research questions we investigate are:

- RQ0: What API versions do clients use?
- RQ1: How does API method deprecation affect clients?
- RQ2: What is the scale of reaction in affected clients?
- RQ3: What proportion of deprecations affects clients?
- RQ4: What is the time-frame of reaction in affected clients?
- RQ5: Do affected clients react similarly?

**B. Research Method, Contrasted with the Previous Study**

Robbes et al. analyzed projects hosted on the SqueakSource platform that used the Monticello versioning system. The dataset contained 7 years of evolution of more than 2,600 systems, which collectively had over 3,000 contributors. They identified 577 deprecated methods and 186 deprecated classes in this dataset. While its results were very informative, this previous study had several shortcomings that this follow-up study addresses. We describe our data collection methodology at increasingly finer granularity: starting from the selection of the subject systems, to detecting the use of versions, methods, and deprecations.

**Subject systems.** The original study was based on a rather specific dataset, the Squeak and Pharo ecosystems found on SqueakSource. Due to this, the set of systems investigated in the previous study was relatively small. To overcome this limitation, we focus on a mainstream ecosystem: Java projects hosted on the social coding platform GitHub. Java is the most popular programming language according to various rankings [7], [8], and GitHub is the most popular and largest hosting service [9]. Our criteria for selection included popularity, reliability, and variety: we measure popularity in terms of number of clients on GitHub, length of history, and overall reputation (e.g., on Stack Overflow); we ensure reliability by picking APIs that are regularly developed and maintained; and we select APIs pertaining to different domains. These criteria ensure that the APIs have a representative evolution history, do not introduce confounding factors due to poor management, and do not limit the types of clients that use them.
We limit our study to Java projects that use the Maven build system, because Maven-based projects use Project Object Model (POM) files to specify and manage the API dependencies that the project refers to. We searched for POM files in the master branch of Java projects and found approximately 42,000 Maven-based ones on GitHub. By parsing their POM files, we were able to obtain all the APIs that they depend on. We then created a ranking of the most popular APIs, which we used to guide our choice of APIs to investigate. This selection step results in the choice of 5 APIs hosted on GitHub, namely: EasyMock [10], Guava [11], Guice [12], Hibernate [13], and Spring [14].² The first 6 columns of Table I provide additional information on these APIs. Subsequently, we select the main subjects of this study: the clients of APIs introducing deprecated methods. Using the aforementioned analysis of the POM files, we have the list of all possible clients. We refine it using the GHTorrent dataset [15] to select only active projects. We also remove clients that had not been actively maintained in the 6 months preceding our data collection, to eliminate ‘dead’ or ‘stagnating’ projects. In total, 25,357 projects refer to one or more of the 5 aforementioned popular APIs. The seventh column in Table I provides an overview of the clients selected, by API.

**API version usage.** Explicit library dependencies are rarely mentioned in Smalltalk, and there are several ways to specify them, often programmatically and not declaratively; also, Smalltalk does not use import statements as Java does. Thus, it is hard to detect dependencies between projects (heuristics are needed [16]) and to analyze the impact of deprecated methods on clients. In contrast, Maven projects specify their dependencies explicitly and declaratively: we can determine the API version a project depends on, and hence answer more questions, such as whether projects freeze or upgrade their dependencies.
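As an illustration of this declarative dependency information, the coordinate extraction we perform on each POM file can be sketched as follows. This is a minimal sketch using the JDK's built-in DOM parser, not our actual mining pipeline, and the example coordinates are purely illustrative.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class PomDependencies {

    /** Extract "groupId:artifactId:version" coordinates from a POM's dependency entries. */
    public static List<String> coordinates(String pomXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(pomXml.getBytes(StandardCharsets.UTF_8)));
        List<String> result = new ArrayList<>();
        NodeList deps = doc.getElementsByTagName("dependency");
        for (int i = 0; i < deps.getLength(); i++) {
            Element d = (Element) deps.item(i);
            // A missing <version> element is treated as "unspecified": Maven then
            // resolves it to the latest available version at build time.
            result.add(text(d, "groupId") + ":" + text(d, "artifactId") + ":" + text(d, "version"));
        }
        return result;
    }

    private static String text(Element dep, String tag) {
        NodeList nodes = dep.getElementsByTagName(tag);
        return nodes.getLength() == 0 ? "unspecified" : nodes.item(0).getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        String pom = "<project><dependencies><dependency>"
                + "<groupId>com.google.guava</groupId>"
                + "<artifactId>guava</artifactId>"
                + "<version>18.0</version>"
                + "</dependency></dependencies></project>";
        System.out.println(coordinates(pom)); // prints [com.google.guava:guava:18.0]
    }
}
```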
In particular, we only consider projects that encode specific versions of APIs, or unspecified versions (which are resolved to the latest API version at that date). We do not consider ranges of versions; however, very few projects use those (84 across all 5 APIs, while we include 25,357 API dependencies to these 5 APIs). In addition, few projects use unspecified API versions (269 of the 25,357, which we do include).

**Fine-grained method/annotation usage.** Due to the lack of explicit type information in Smalltalk, there is no way of actually knowing whether a specific class is referenced and whether a method invocation found is actually from that referenced class. This does not present an issue for invocations of methods that have unique names in the ecosystem. However, for methods with common names such as toString or name or item, this can lead to imprecise results. In the previous study, Robbes et al. resorted to manual analysis of the reactions to an API change, but had to discard cases that were too noisy. In this study, Java’s static type system addresses this issue without the need for a tedious and conservative manual analysis. On the other hand, Java APIs can be used in various manners. In Guava, actual method invocations are made on object instances of the Guava API classes, as one would expect. However, in Guice, clients use annotations to invoke API functionality, resulting in a radically different interaction model. These API usage variabilities must be considered. While mining for API usage, we have to ensure that we connect each method invocation or annotation usage to the parent class to which it belongs.

---

² We do not mine the JDK itself, because to identify the JDK version required by a client one needs to rely on the client using the Maven compiler plugin. Yet, this plugin is rarely used, since it is mainly used to specify a JDK version other than the default one used by the client.
There are multiple approaches to mining the usage data from source code. The first uses pattern matching on a method name and the imports in a Java file to find which API a certain method invocation belongs to. The second uses the tool PPA [17], which can work on partial programs and find the usage of a certain method of an API. The third builds the code of a client project and then parses the bytecode to find type-resolved invocations. Finally, the fourth uses the Eclipse JDT AST parser to mine type-resolved invocations from a source code file. We created an approach, fine-GRAPE, based on the last option [18], [19], that meets the following requirements: (1) fine-GRAPE handles the large-scale data in GitHub, (2) it does not depend on building the client code, (3) it results in a type-checked API usage dataset, (4) it collects explicit version usage information, and (5) it processes the whole history of each client.

**Detect deprecation.** In Smalltalk, users insert a call to a deprecation method in the body of the deprecated method. This call often indicates which feature replaces the deprecated call. However, there is no IDE support: the IDE does not indicate to developers that the feature being used is deprecated. Instead, calls to deprecated methods output runtime warnings. In contrast, Java provides two different mechanisms to mark a feature as deprecated. The first is the @deprecated tag provided in the Javadoc specification. This tag is generally used to mark an artifact as deprecated in the documentation of the code, and has been present in Java since JDK version 1.1. Since this tag is purely for documentation purposes, the Java Language Specification (JLS) makes no provision for it to trigger compiler-level warnings. However, the standard Sun JDK compiler does issue a warning to a developer when it encounters the usage of an artifact that has been marked as deprecated using this mechanism.
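For reference, the example below shows a method carrying both this Javadoc tag and the compiler-visible @Deprecated annotation (introduced in JDK 1.5), together with a minimal detection sketch of our own based on core reflection; the study itself scans compiled JAR files rather than loaded classes, and the method names here are hypothetical.

```java
import java.lang.reflect.Method;

public class DeprecationMarkers {

    /**
     * @deprecated Javadoc-only marker (since JDK 1.1): pure documentation,
     *             invisible in the compiled annotation metadata by itself.
     *             Use {@link #replacement()} instead.
     */
    @Deprecated // compiler-visible marker introduced in JDK 1.5
    public static int legacy() { return 1; }

    public static int replacement() { return 2; }

    // Minimal detection sketch: @Deprecated has runtime retention, so
    // reflection can report it for any loaded class.
    public static boolean isDeprecated(Class<?> cls, String methodName)
            throws NoSuchMethodException {
        Method m = cls.getMethod(methodName);
        return m.isAnnotationPresent(Deprecated.class);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("legacy deprecated?      "
                + isDeprecated(DeprecationMarkers.class, "legacy"));      // true
        System.out.println("replacement deprecated? "
                + isDeprecated(DeprecationMarkers.class, "replacement")); // false
    }
}
```

Only the annotation is detectable this way; the Javadoc tag exists solely in the source and its generated documentation, which is why our mining (like this sketch) covers the annotation only.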
More recently, JDK 1.5 introduced a second mechanism to mark an artifact as deprecated: a source code annotation called @Deprecated (the same JDK introduced the use of source code annotations). This annotation is a compiler directive declaring that an artifact is deprecated. This feature is part of the Java Language Specification; as such, any Java compiler supports it. It is now common practice to use both mechanisms when marking a certain feature as deprecated. The Javadoc tag is used so that developers can indicate in the Javadoc the reasons behind the deprecation of the artifact and the suggested replacement. The annotation is now the standard way in which Java marks features as deprecated. To identify the deprecated features, we first download from the Maven central server the different versions of the APIs used by the clients. These APIs come as Java Archive (JAR) files containing the compiled classes of the API source. We use the ASM [20] class file parsing library to parse all the classes and their respective methods. Whenever an instance of the @Deprecated annotation is found, we mark the entity it refers to as deprecated and store this in our database. Since our approach only detects compiler annotations, we do not handle the Javadoc tag; see the threats to validity section for a discussion of this. We also do not handle methods that were removed from the API without warning, as these are out of the scope of this study.

### III. Results

In this section we answer the research questions detailed in Section II-A. Figure 1 exemplifies the behavior of an API and its clients; when possible, we refer to it to explain the methodology behind the answer to each research question.
Table I

<table>
<thead>
<tr>
<th>API (GitHub repo)</th>
<th>Description</th>
<th>Inception</th>
<th>Releases</th>
<th>Unique entities</th>
<th>Number of clients</th>
<th>Usage across history</th>
</tr>
</thead>
<tbody>
<tr>
<td>EasyMock (easymock)</td>
<td>A testing framework that allows for the mocking of Java objects during testing.</td>
<td>Feb 06</td>
<td>14</td>
<td>102</td>
<td>623</td>
<td>649</td>
</tr>
<tr>
<td>Guava (google/guava)</td>
<td>A collections API that provides data structures that are an extension to the data structures already present in the Java SDK. Examples of these new data structures include: multimaps, multisets and bitmaps.</td>
<td>Apr 10</td>
<td>18</td>
<td>2,310</td>
<td>14,828</td>
<td>3,013</td>
</tr>
<tr>
<td>Guice (google/guice)</td>
<td>A dependency injection library created by Google.</td>
<td>Jun 07</td>
<td>8</td>
<td>319</td>
<td>1,999</td>
<td>654</td>
</tr>
<tr>
<td>Hibernate (hibernate/hibernate-orm)</td>
<td>A framework for mapping an object-oriented domain to a relational database domain. We focus on the core and entitymanager projects under the hibernate banner.</td>
<td>Nov 08</td>
<td>77</td>
<td>2,037</td>
<td>11,625</td>
<td>6,038</td>
</tr>
<tr>
<td>Spring (spring-projects/spring-framework)</td>
<td>A framework that provides an Inversion of Control (IoC) container, which allows developers to access Java objects with the help of reflection. We choose to focus on just the spring-core, spring-context and spring-test modules due to their popularity.</td>
<td>Feb 07</td>
<td>40</td>
<td>5,376</td>
<td>41,948</td>
<td>15,003</td>
</tr>
</tbody>
</table>

Figure 2. Popularity breakdown of versions, by API (for each API, the share of clients on the 1st and 2nd most popular releases, the latest release, and other releases).

**RQ0: What API versions do clients use?**
Our first research question seeks to investigate the popularity of API versions and to understand the version change behavior of the clients. This sets the ground for the following answers. We start by considering all the available versions of each API and measure popularity in terms of how many clients were actually using each version at the time of our data collection. In the example in Figure 1, we would count a popularity of 1 for v7, 2 for v6, and 1 for v4. The column ‘number of clients’ in Table I specifies the absolute number of clients per API, and Figure 2 reports the version popularity results, by API. The general trend shows that a large number of different versions of the APIs are used, and that there is significant fragmentation between the versions (especially in the case of Hibernate, where the top three versions are used by less than 25% of the clients). Further, the general trend is that older versions of the APIs are more popular. These initial results hint at the fact that clients have, to say the least, a delayed upgrading behavior, which could be related to how they deal with maintenance and deprecated methods. For this reason, we analyze whether the clients ever updated their dependencies or whether they “froze” their dependencies—that is, whether they never updated their API version. In the example in Figure 1, we count three clients who upgraded versions in their history. If projects update, we measure how long they took to do so (the time between the release of the new version of the API in Maven central and the moment the project’s POM file is updated). Table II summarizes the results. The vast majority of the clients we consider freeze to one single version of the API they use. Further, we see that this holds for all the APIs except Spring, whose clients have at least one update in 74% of the cases. In terms of time to update, interestingly, the median is lower for clients of APIs that have more clients that update, such as Hibernate and Spring.
In general, update time varies considerably—we will come back to this in RQ3.

**RQ1: How does API method deprecation affect clients?**

In RQ0 we showed that most clients do not adopt new API versions. We now focus on the clients that use deprecated methods, and on whether and how they react to deprecation.

**Affected by deprecation.** From the data, we classify clients into 4 categories, which we describe referring to Figure 1:

- **Unaffected:** These clients never use a deprecated method. None of the clients in Figure 1 belong to this category.
- **Potentially affected:** These clients do not use any deprecated method, but should they upgrade their version, they would be affected. Client 1 in Figure 1 belongs to this category.
- **Affected:** These clients use at least one method declared as deprecated, but do not change the API version throughout their history, as happens in the case of Client 2.
- **Affected and changing version:** These clients use at least one method declared as deprecated and also update their API version. Clients 3, 4, and 5 belong to this category.

Figure 3. Deprecation status of clients of each API

Figure 3 reports the breakdown of the clients in the four categories. The clearest pattern is that the vast majority of clients, across all APIs, **never use any deprecated method** throughout their entire history. This is particularly surprising in the case of Hibernate, as it deprecated most of its methods (we will discuss this in RQ3). Clients affected by deprecation vary from more than 20% for Easymock and Guava, to less than 10% for Hibernate, and barely any for Spring. Of these, less than one third also change their API version, thus highlighting a very static behavior of clients with respect to API usage, despite our selection of active projects.

**Common reactions to deprecation.** We investigate how ‘Affected and changing version’ clients deal with deprecation.
We exclude ‘Affected’ clients, since they do not have strong incentives to fix a deprecation warning if they do not update their API, as the method is still functional in their version. The ‘Affected and changing version’ clients of Easymock and Guava largely react to deprecated entities (71% and 65%). For Hibernate and Spring we see a similar minority of clients that react (31% and 32%). For all the APIs, the relative number of clients that fix all calls made to a deprecated entity is between 16% and 22%. Among the clients that react, we find that at the method level the most popular reaction is to delete the reference to the deprecated method (medians of 50% to 67% for Easymock, Guava and Hibernate, and 100% for Spring). We define as deletion a reaction in which the deprecated entity is removed and no new invocation to the same API is added. Some Hibernate and Guava clients roll back to a previous version where the entity is not yet deprecated. Easymock, Guava and Hibernate clients tend to replace deprecated calls with other calls to the same API; however, this number is small. Surprisingly, a vast majority of projects (95 to 100%) add calls to deprecated API elements, despite the deprecation being already in place. This concerns even the ones that end up migrating all their deprecated API elements later on.

**The strange case of Guice.** We analyzed all the Guice projects and looked for usages of a deprecated annotation or method, but found that none of the projects used either. The reason is that Guice does not have many methods or annotations that have been deprecated. In fact, Guice follows a very aggressive deprecation policy: methods are removed from the API without being deprecated previously. We observed this behavior in the Pharo ecosystem as well, and studied it separately [21]. In our next research questions we thus do not analyze Guice, as its deprecations are not explicitly marked.
**RQ2: What is the scale of reaction in affected clients?**

The work we partially replicate [5] measures the reactions to individual API changes in terms of commits and developers affected. Having exact API dependency information, we can measure API evolution on a per-API basis, rather than per API element. It is hence more interesting to measure the magnitude of the changes necessary between two API versions in terms of the number of method calls that need to be updated between the two versions. Another measure of the difficulty of the task is the number of different deprecated methods one has to react to: it is easier to adapt to 10 usages of the same deprecation than it is to react to 10 usages of 10 different deprecated methods.

**Actual reactions.** We measure the scale of the actual reactions of clients that do react to API changes. We count separately the reactions to the same deprecated method and the number of single reactions. In Figure 1, client 3, after upgrading to v5 and before upgrading to v6, makes two modifications to statements including the deprecated method ‘boo’. We count these as two reactions to deprecation but one unique deprecated method. We consider that client 5 reacts to deprecation when rolling back from v5 to v4: we count one reaction and one unique deprecated method. We focus on the upper half of the distribution (median, upper quartile, 95th percentile, and maximum) to assess the critical cases; we expect the effort needed in the bottom half to be low. Table III reports the results. The first column reports the absolute number of non-frozen affected clients that reacted. The scale of reaction varies: the majority of clients react on less than a dozen statements, with a single unique deprecated method involved. Spring stands out, with a median number of statements with reactions of 31 and a median number of unique deprecated methods involved of 17. Outliers invest more heavily in reacting to deprecated methods.
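The distinction between the two counts (statements reacted upon versus unique deprecated methods involved) can be sketched in a few lines; the event list below is hypothetical and mirrors client 3 of Figure 1, with one entry per modified statement naming the deprecated method it involved.

```java
import java.util.HashSet;
import java.util.List;

public class ReactionScale {

    /** Number of statements reacted upon: one event per modified statement. */
    public static int reactions(List<String> deprecatedMethodPerReaction) {
        return deprecatedMethodPerReaction.size();
    }

    /** Number of distinct deprecated methods those reactions involve. */
    public static int uniqueDeprecatedMethods(List<String> deprecatedMethodPerReaction) {
        return new HashSet<>(deprecatedMethodPerReaction).size();
    }

    public static void main(String[] args) {
        // Client 3 in Figure 1 fixes two statements, both calling the deprecated 'boo':
        List<String> events = List.of("boo", "boo");
        System.out.println(reactions(events) + " reactions, "
                + uniqueDeprecatedMethods(events) + " unique deprecated method(s)");
        // prints: 2 reactions, 1 unique deprecated method(s)
    }
}
```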
As seen next, this may explain the reluctance of some projects to update.

**Potential reactions.** Since a large portion of projects does not react, we wondered how much work is accumulating should they wish to update their dependencies. We thus counted the number of updates that a project would need to perform in order to make its code base compliant with the latest version of the API (i.e., removing all deprecation warnings). In Figure 1, the only client that is potentially affected by deprecation is client 1, which would have two statements needing reaction (i.e., those involving the method ‘foo’), in which only one unique deprecated method is involved. As before, we focus on the upper half of the distribution. Table IV reports the results. In this case, the first column reports the absolute number of clients that would need to react. We notice that the vast majority of clients use two or fewer unique deprecated methods. However, they would generally need to react to a higher number of statements compared to the clients that reacted (reported in Table III), except for those using Spring. Overall, while the majority of projects would not need to invest a large effort to upgrade to the latest version, a significant minority of projects would need to update a large quantity of methods. This can explain their reluctance to do so. However, this situation, if left unchecked—as is the case now—can and does grow out of control. If there is a silver lining, it is that the number of unique methods to update is generally low, hence the adaptations can be systematic. Outliers would run into trouble, with several unique methods to adapt to.
Table IV

<table>
<thead>
<tr>
<th rowspan="2">API</th>
<th rowspan="2">Clients potentially needing reaction</th>
<th colspan="4">Statements potentially needing reaction (unique deprecated methods involved)</th>
</tr>
<tr>
<th>median</th>
<th>Q3</th>
<th>95th perc.</th>
<th>max</th>
</tr>
</thead>
<tbody>
<tr><td>Easymock</td><td>178</td><td>55 (1)</td><td>254 (1)</td><td>1,120 (5)</td><td>4,464 (7)</td></tr>
<tr><td>Guava</td><td>917</td><td>12 (1)</td><td>42 (2)</td><td>319 (7)</td><td>8,568 (44)</td></tr>
<tr><td>Hibernate</td><td>521</td><td>15 (1)</td><td>35 (1)</td><td>216 (2)</td><td>17,471 (140)</td></tr>
<tr><td>Spring</td><td>41</td><td>3 (1)</td><td>4 (1)</td><td>51 (2)</td><td>205 (55)</td></tr>
</tbody>
</table>

Table V

<table>
<thead>
<tr>
<th rowspan="2">API</th>
<th colspan="2">Unique deprecated methods defined by API</th>
<th colspan="2">Unique deprecated methods used by clients</th>
</tr>
<tr>
<th>count</th>
<th>% over total</th>
<th>count</th>
<th>% over all deprecated</th>
</tr>
</thead>
<tbody>
<tr><td>Easymock</td><td>124</td><td>20%</td><td>16</td><td>13%</td></tr>
<tr><td>Guava</td><td>1,479</td><td>10%</td><td>104</td><td>7%</td></tr>
<tr><td>Hibernate</td><td>7,591</td><td>65%</td><td>487</td><td>6%</td></tr>
<tr><td>Spring</td><td>1,320</td><td>3%</td><td>149</td><td>11%</td></tr>
</tbody>
</table>

Table V summarizes the results, including the total count of deprecated methods per API (with the proportion over the total count of methods) and the count of how many of these deprecated methods are used by clients. APIs are not shy in deprecating methods, with more than 1,000 deprecations each for Guava, Spring, and Hibernate. The case of Hibernate is particularly striking, with 65% of unique methods being eventually deprecated, indicating that this API makes heavy usage of this feature of Java. The proportion of deprecated methods that affect clients is rather low, around 10% in all 4 of the APIs.

**RQ3: What proportion of deprecations affects clients?**

The previous research question shows that most of the actual and potential reactions of clients to method deprecations involve a few unique methods.
This does not tell us how these methods are distributed across all the deprecated API methods. We compute the proportion of deprecated methods that clients use. In Figure 1, there is at least one usage of the deprecated methods ‘boo’ and ‘foo’, while there is no usage of ‘goo’. In this case, we would count 3 unique deprecated methods, of which one is never used by clients.

**RQ4: What is the time-frame of reaction in affected clients?**

Most deprecated method calls are reacted upon on the same day the call was either introduced or marked as deprecated. Barring outliers, reaction times in Hibernate and Spring are uniformly fast (the third quartiles being at 0 and 2.5 days). Reaction times are, however, longer for clients of Guava and Easymock, with upper quartiles of 47 and 200 days respectively. Outliers have a long reaction time, in the order of hundreds of days.

**RQ5: Do affected clients react similarly?**

Replacing a deprecated entity with an invocation to a non-deprecated one is a desirable reaction, as the client of an API continues using it. This research question seeks to investigate the clients’ behavior when it comes to replacement reactions. Such an analysis allows us to ascertain whether an approach inspired by Schäfer et al.’s [22] would work on the clients in our dataset. Their approach recommends API changes to a client based on common, or systematic, patterns in the evolution of other clients of the same API.

**Consistency of replacements.** There is no definite way to identify whether a new call made to the API is a replacement for the original deprecated call, so we rely on a heuristic: we analyze the co-change relationships in each class file across all the projects; if we find a commit where a client removes a usage of a deprecated method (e.g., add(String)) and adds a reference to another method in the same API (e.g., add(String, Integer)), this new method invocation is a possible replacement for the original deprecated entity. A drawback is that in-house replacements or replacements from other competing APIs cannot be identified.
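The co-change heuristic can be sketched as a frequency count over commits. This is a hypothetical sketch, not the study's implementation; the Commit record, method names, and the data in main are ours.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CoChangeHeuristic {

    /** One commit's removed deprecated calls and added same-API calls (hypothetical shape). */
    public record Commit(Set<String> removedDeprecated, Set<String> addedSameApi) {}

    /**
     * For each deprecated method, count how often each added same-API method
     * co-occurs with its removal; the most frequent co-change is the candidate
     * replacement, and a 100% frequency marks a systematic transition.
     */
    public static Map<String, Map<String, Integer>> replacementCounts(List<Commit> commits) {
        Map<String, Map<String, Integer>> counts = new HashMap<>();
        for (Commit c : commits) {
            for (String removed : c.removedDeprecated()) {
                for (String added : c.addedSameApi()) {
                    counts.computeIfAbsent(removed, k -> new HashMap<>())
                          .merge(added, 1, Integer::sum);
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Commit> commits = List.of(
                new Commit(Set.of("add(String)"), Set.of("add(String,Integer)")),
                new Commit(Set.of("add(String)"), Set.of("add(String,Integer)")),
                new Commit(Set.of("add(String)"), Set.of("insert(String)")));
        // add(String) co-changes with add(String,Integer) in 2 of 3 commits (67%),
        // so add(String,Integer) is its candidate replacement, non-systematic.
        System.out.println(replacementCounts(commits).get("add(String)"));
    }
}
```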
Nonetheless, we compute the frequencies of these co-change relationships to find whether clients react uniformly to a deprecation. We found that Easymock has no systematic transitions: there are only 3 distinct methods for which there are replacements, and the highest frequency of the co-change relationships is 34%. For Guava we find 23 API replacements; in 17% of the cases there is a systematic transition, i.e., there is only one way in which a deprecated method is replaced by clients. Spring clients only react by deleting deprecated entities instead of replacing them, resulting in no information on replacements of features. In Hibernate, we find only 4 distinct methods where replacements were made. There were no systematic replacements, and the maximum frequency is 75%. Since API replacements are rather uncommon in our dataset, with the exception of Guava, we find that while an approach such as the one of Schäfer et al. could conceptually be quite useful, we would not be able to implement it in our case due to the small amount of replacement data. Quality of documentation. There are very few clients reacting to deprecation by actually replacing the deprecated call with one that is not deprecated. This led us to question the quality of the documentation of these APIs. Ideally, one would like to have a clear explanation of the correct replacement for a deprecated method, as in the Javadoc reported in Figure 5. However, given the results we obtained, we suspected this might not be the case. We systematically inspected the Javadoc to see whether deprecated features had documentation on why the feature was deprecated, and whether there was an indication of an appropriate replacement or whether a replacement is needed at all. We performed a manual analysis of the quality of the API documentation.
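The kind of documentation we looked for can be illustrated with a hypothetical example (the class and methods below are invented, not taken from any of the studied APIs): a deprecation message that states a rationale, names a concrete replacement via `{@link}`, and announces a removal schedule, while the deprecated overload delegates to its replacement:

```java
// Hypothetical example of well-documented deprecation: rationale,
// concrete replacement, and removal schedule in the Javadoc, with the
// old overload delegating to the recommended new one.
public class Registry {

    private final java.util.Map<String, Integer> entries = new java.util.HashMap<>();

    /**
     * Adds an entry with a default priority of 0.
     *
     * @deprecated Priorities are now mandatory; this overload silently
     *             assumed priority 0. Use {@link #add(String, int)}
     *             instead. Scheduled for removal in version 16.0.
     */
    @Deprecated
    public void add(String name) {
        add(name, 0); // delegate to the recommended replacement
    }

    /** Adds an entry with an explicit priority. */
    public void add(String name, int priority) {
        entries.put(name, priority);
    }

    public Integer priorityOf(String name) {
        return entries.get(name);
    }
}
```

With documentation of this shape, a client can mechanically swap the deprecated call for the replacement; the findings below show how close each API comes to this ideal.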
For Guava, we investigated all 104 deprecated methods that had an impact on clients; for Easymock, we looked at all 16 deprecated methods that had an impact on clients; for Spring and Hibernate, we inspected a sample of methods (100 each) that have an impact on the clients. In Easymock, 15 of the 16 deprecated methods are instance creation methods, whose deprecation message directs the reader to use a Builder pattern instead of these methods. The last deprecation message is the only one with a rationale and is also the most problematic: the method is incompatible with Java version 7, since its more conservative compiler does not accept it; no replacement is given. In Guava, 61 messages recommend a replacement, 39 state the method is no longer needed and hence can be safely deleted, and only 5 deprecated methods do not have a message. It is also the API with the most diverse deprecation messages. Most messages that state a method is no longer needed are rather cryptic (“no need to use this”). On the other hand, several messages have more precise rationales, such as stating that functionality is being redistributed to other classes. Others provide several alternative recommendations and detailed instructions, and one method provides as many as four alternatives, although this is because the deprecated method does not have exact equivalents. Guava also specifies in the deprecation message when entities will be removed (e.g., “This method is scheduled for removal in Guava 16.0.”, or even “This method is scheduled for deletion in June 2013.”). For Hibernate, all the messages provide a replacement, but most provide no rationale for the deprecation. The only exceptions are messages stating the advantages of a recommended database connection compared to the deprecated one. For Spring, the messages provide a replacement (88) or state that the method is no longer needed (12).
Spring is the only API that is consistent in specifying in which version of the API the methods were deprecated. On the other hand, most of the messages do not specify any rationale for the decision, except for the JDK version testing methods that are no longer needed, since Spring no longer runs on early JDK versions. Overall, maintainers of popular APIs make an effort to provide their clients with high-quality documentation. We classify this as high-quality documentation because clients are given sufficient support. Although we did find rationales as to why a method was deprecated, providing them was far from systematic. Although replacement is not the only suggested solution, it is the most common; this is in contrast to the actual behavior of clients. In spite of the good quality of the documentation, clients rarely follow it. Summary of findings. We first investigated how many API clients actively maintain their projects by updating their dependencies. We found that, for all the APIs, only a minority of clients upgrade or change the version of the API they use. As a direct consequence, older versions of APIs are more popular than newer ones. We then looked at the number of projects that are affected by deprecation. We focused on projects that change version and are affected by deprecation, as they are the ones that show the full range of reactions. Clients of Guava, EasyMock, and (to a lesser degree) Hibernate were the ones most affected, whereas clients of Spring were virtually unaffected by deprecation, and for Guice we could find no data due to Guice’s aggressive deprecation policy. We also found that most affected clients introduced new calls to deprecated entities, despite knowing that they were deprecated. Looking at the reaction behavior of these clients, we saw that ‘deletion’ was the most popular way to react to a deprecated entity. Replacements were seldom performed, and systematic replacements were rarer still.
This is despite the fact that the APIs provide excellent documentation that should aid in the replacement of a deprecated feature. When a reaction did take place, it usually happened almost immediately after the entity was first marked as deprecated. IV. Discussion We now discuss our main findings and contrast them with the findings of the Smalltalk study we partially replicate. Based on this, we give recommendations on future research directions. We also present threats to validity. A. Comparison with the deprecation study on Smalltalk Contrasting our results with those of the study we partially replicate, several interesting findings emerge: Proportion of deprecated methods affecting clients. Both studies found that only a small proportion of deprecated methods affects clients. In the case of Smalltalk, this proportion is below 15%; in our results, we found it to be around 10%. Considering that the two studies investigate two largely different ecosystems, languages, and communities, this similarity is noteworthy. Even though API developers do not know exactly how their clients use the methods they write and would be interested in this information [23], the functionalities they deprecate are mostly unused by the clients; thus deprecation causes few problems. Nevertheless, this also suggests that most of the effort API developers put into properly deprecating methods and documenting alternatives is not actually necessary: API developers, in most cases, could directly remove the methods they instead diligently deprecate. Not reacting to deprecation. Despite the differences in the deprecation mechanisms and warnings, the vast majority of the clients in both studies do not react to deprecation. In this study, we could also quantify the impact of deprecation should clients decide to upgrade their API versions, and we find that, in some cases, the impact would be very high.
By not reacting to deprecated calls, clients let the accrued technical debt grow to large and unmanageable proportions (e.g., one Hibernate client would have to change 17,471 API invocations). We also found more counter-reactions (i.e., adding more calls to methods that are known to be deprecated) than for Smalltalk clients. This may be related to the way in which the two platforms raise deprecation warnings: in Java, a deprecation produces a compile-time warning that can easily be ignored, while in Smalltalk some deprecations lead to run-time errors, which require intervention. Systematic changes and deprecation messages. The Smalltalk study found that, in a large number of cases, most clients conduct systematic replacements of deprecated API elements. In our study, we find that, instead, replacements are not that common. We deem this difference extremely surprising. In fact, the clients we consider have access to very precise documentation that should act as an aid in the transition from a deprecated API artifact to one that is not deprecated; this is not the case for Smalltalk, where only half of the deprecation messages were deemed useful. This seems to indicate that proper documentation is not a sufficient incentive for API clients to adopt correct behavior, also from a maintenance perspective, when facing deprecated methods. As an indication to developers of language platforms, our evidence suggests considering more stringent policies on how deprecation impacts clients’ run-time behavior. Clients of deprecated methods. Overall, we see in the behavior of API clients that deprecation mechanisms are not ideal. We see two possible reasons for this: (1) developers of clients do not see the importance of removing references to deprecated artifacts, and (2) current incentives are not sufficient to overcome this situation.
Incentives could lie both in the behavior of the API introducing deprecations and in the restrictions posed by the engineers of the language. This situation highlights the need for further research on this topic, to understand whether and how deprecation could be revisited to have a more positive impact on keeping technical debt low and improving the maintainability of software systems. In the following, we detail some first steps in this direction that clearly emerge from the findings of our study. B. Future research directions If it ain’t broke, don’t fix it. We were surprised that so many projects did not update their API versions. Those that do are often not in a hurry, as we see for Easymock or Guice. Developers also routinely leave deprecated method calls in their code base despite the warnings, and often even add new calls. This is in spite of all the APIs providing precise instructions on which replacements to use. As such, the effort required to upgrade to a new version piles up. Studies can be designed and carried out to determine the reasons for these choices, thus indicating how future implementations of deprecation can give better incentives to clients of deprecated methods. Differences in deprecation strategies. Among the clients that do upgrade, we can see differences between the APIs. We were particularly impressed by Spring, which has by far the most clients and also the fewest clients using deprecated methods. It appears that their deprecation strategy is very conservative, even though they deprecated a lot of methods. This may explain why many more Spring clients upgrade their API version. Likewise, perhaps the very aggressive deprecation policy of Guice, which removes methods without warnings, has an impact on the vast majority of Guice clients deciding to stick with their version. We note that the APIs with the highest proportion of projects with deprecated calls are also the ones whose projects are least likely to upgrade.
We did not investigate this further, as our focus was mostly on the behavior of clients, but studies suggesting to API developers the best strategies for persuading clients to follow deprecation messages would be very informative for the actual practice of software evolution. Impact of deprecation messages. We also wonder whether the deprecation messages of Guava, which explicitly state when a method will be removed, could act as a double-edged sword: some clients may be motivated to upgrade quickly, while others may be discouraged and not update the API, or roll back. In the case of Easymock, the particular deprecated method that has no documented alternative may be a roadblock to upgrading. Studies can be devised to better understand the role of deprecation messages and their real effectiveness. C. Threats to validity Since we do not detect deprecation that is only specified by Javadoc tags, we may underestimate the impact of API deprecation in some cases. To quantify the size of this threat, we manually checked each API and found that this is an issue only for Hibernate before version 4, while the other APIs are unaffected. For this reason, the behavior of a fraction of Hibernate clients may not be captured completely correctly. We considered crawling the online Javadoc of Hibernate to recover these tags, but we found that the Javadoc of some versions of the API was missing (e.g., version 3.1.9). Even though our findings are focused on the clients, for which we have a statistically significant sample, some of the results depend on the analyzed APIs (such as the impact of the API deprecation strategies on the clients). As we suggested earlier in this section, further studies could be conducted to investigate these aspects. The use of projects from GitHub leads to a number of threats, as documented by Kalliamvakou et al. [24]. In our data collection, we tried to mitigate these biases (e.g., we only selected active projects), but some limitations remain.
The projects are all open-source, and some may be personal projects where maintenance is not a priority. GitHub projects may be toy projects, or not projects at all (also from [24]); we think this is unlikely, as we only select projects that use Maven; these are by definition Java projects and, by using Maven, show that they adhere to a minimum of software engineering practices. Finally, we only look at the master branch of the projects. We assume that projects follow the git convention that the master branch is the latest working copy of the code [25]. However, we may be missing reactions to API deprecations that have not yet been merged into the main branch. V. RELATED WORK Studies of API Evolution. Several studies of API evolution have been performed, at smaller or larger scales. Most of these studies focused on the API side rather than on the client side, as ours does. For example, Dig and Johnson studied and classified the API breaking changes in 4 APIs [26]; they did not investigate their impact on clients. They found that 80% of the changes were due to refactorings. Cossette and Walker [27] studied five Java APIs in order to evaluate how API evolution recommenders would perform on these cases. They found that each recommender handles a subset of the cases, but that none of them could handle all the cases they referenced. The Android APIs have been extensively studied. McDonnell et al. [28] investigated the stability and adoption of the Android API on 10 systems; the API changes are derived from the Android documentation. They found that the API is evolving quickly, and that clients have trouble catching up with the evolution. Linares-Vásquez et al. also study the changes in Android, but from the perspective of questions and answers on Stack Overflow [29], not API clients directly. Bavota et al.
[30] study how changes in the APIs used by mobile apps (responsible for defects if not reacted upon) correlate with user ratings: successful applications depended on less change-prone APIs. This is one of the few large-scale studies, with more than 5,000 applications. Wang et al. [31] study the specific case of the evolution of 11 REST APIs. Instead of analyzing API clients, they also collect questions and answers from Stack Overflow that concern the changing API elements. Among the studies considering clients of APIs, we find for example the one by Espinha et al. [32], who study 43 mobile client applications depending on web APIs and how they respond to web API evolution. Also, Raemaekers et al. investigated the relation among breaking changes, deprecation, and semantic versioning [33]. They found that API developers introduce deprecated artifacts and breaking changes in equal measure across both minor and major API versions, thus not allowing clients to predict API stability from semantic versioning. Finally, previous work including one of the authors of this paper (i.e., [5] and [21]) comprises large-scale studies of API clients in the Pharo ecosystem. The first study focused on API deprecations, while the second one focused on API changes that were not marked as deprecations beforehand. Another work [34] analyzes deprecation messages in more than 600 Java systems, finding that 64% of deprecated methods have replacement messages. Mining of API Usage. Studies that present approaches to mining API usage from client code are related to our work, especially with respect to the data collection methodology. One of the earliest works in this field is that of Xie and Pei [35], who developed a tool called MAPO (Mining API usage Pattern from Open source repositories). MAPO mines code search engines for API usage samples and presents the results to the developer for inspection. Mileva et al. [36] worked in the field of API popularity; they looked at the dependencies of projects hosted on Apache and SourceForge. Based on this information, they ranked the usage of API elements such as methods and classes. This allowed them to predict the popularity trend of APIs and their elements. Hou et al. [37] used a popularity-based approach to improve code completion. They developed a tool that gives code completion suggestions based on the frequency with which a certain class or method of an API is used in the API’s ecosystem. Lämmel et al. [38] mine usages of popular Java APIs by crawling SourceForge to create a corpus of usage examples that forms the basis for a study on API evolution. The API usages are mined using type-resolved Java ASTs, and these usages are stored in a database. Supporting API evolution. Beyond empirical studies on API evolution, researchers have proposed several approaches to support API evolution and reduce the effort of client developers. Chow and Notkin [39] present an approach where the API developers annotate changed methods with replacement rules that are used to update client systems. Henkel and Diwan [40] propose CatchUp!, a tool using an IDE to capture and replay refactorings related to the API evolution. Dig et al. [41] propose a refactoring-aware version control system for the same purposes. Dagenais and Robillard observe the framework’s evolution to make API change recommendations [42], while Schäfer et al. observe the clients’ evolution [22]. Wu et al. [43] present a hybrid approach that includes textual similarity. Nguyen et al. [44] propose a tool (LibSync) that uses graph-based techniques to help developers migrate from one framework version to another. Finally, Holmes and Walker notify developers of external changes to focus their attention on these events [45]. VI. CONCLUSION We have presented an empirical study on the effect of deprecation of Java API artifacts on their clients.
This is a non-exact replication of a similar study done on the Smalltalk ecosystem. The main differences between the two studies are the type systems of the languages targeted (static vs. dynamic typing) and the scale of the datasets (25,357 vs. 2,600 clients). We found that few API clients update the API version that they use. In addition, the percentage of clients that are affected by deprecated entities is less than 20% for most APIs, except for Spring, where the percentage was unusually low. Most clients that are affected do not react to the deprecated entity at all; when a reaction does take place, clients surprisingly prefer to delete the offending invocation rather than replace it with the recommended functionality. When clients do not upgrade their API versions, they silently accumulate a potentially large amount of technical debt, in the form of future API changes, for when they do finally upgrade; we suspect this can serve as an incentive not to upgrade at all. The results of this study are in some aspects similar to those of the Smalltalk study. This comes as a surprise to us, as we expected that reactions to deprecations by clients would be more prevalent, owing to the fact that Java is a statically typed language. On the other hand, we found that the number of replacements in Smalltalk was higher than in Java, despite Java APIs being better documented. This leads us to ask, as future work, what the reasons behind this are and what can be improved in Java to change this. This study is the first to analyze client reaction behavior to deprecated entities in a statically typed and mainstream language like Java. The conclusions drawn in this study are based on a dataset derived from mining type-checked API usages from a large set of clients.
From the data we gathered, we conclude that deprecation mechanisms as implemented in Java do not provide the right incentives for most developers to migrate away from deprecated API elements, even with the downsides that using deprecated entities entails. Given that there is currently a proposal to revamp Java’s deprecation system,[4] studies such as this one and its potential follow-ups are especially timely. [4] https://bugs.openjdk.java.net/browse/JDK-8065614
Supporting Developers' Coordination in the IDE

Anja Guzzi, Alberto Bacchelli (Delft University of Technology, Delft, The Netherlands; {a.guzzi, a.bacchelli}@tudelft.nl); Yann Riche (Microsoft, Redmond, WA, USA; yannr@microsoft.com); Arie van Deursen (Delft University of Technology, Delft, The Netherlands; arie.vanDeursen@tudelft.nl)

DOI: 10.1145/2675133.2675177. Publication date: 2015. Document version: accepted author manuscript.

ABSTRACT

Teamwork in software engineering is time-consuming and problematic. In this paper, we explore how to better support developers' collaboration in teamwork, focusing on the software implementation phase happening in the integrated development environment (IDE). Conducting a qualitative investigation, we learn that developers' teamwork needs mostly regard coordination, rather than concurrent work on the same (sub)task, and that developers successfully deal with scenarios considered problematic in the literature, but have problems dealing with breaking changes made by peers on the same project. We derive implications and recommendations. Based on one of the latter, we analyze the current IDE support for receiving code changes, finding that historical information is neither visible nor easily accessible. Consequently, we devise and qualitatively evaluate BELLEVUE, the design of an IDE extension to make received changes always visible and code history accessible in the editor.

Author Keywords

Developers' coordination; IDE extension; qualitative study.

ACM Classification Keywords

D.2.6 Software Engineering: Programming Environments

INTRODUCTION

Software engineering is often a team effort.
It is not unusual for hundreds of professionals to collaborate to design, build, evaluate, and maintain software systems [71]. However, teamwork remains one of the most difficult and pervasive problems of software engineering, and developers face a plethora of teamwork problems at different levels [18]. Key to this teamwork are the tools and processes that revolve around it, such as source code management and development tools. Most of developers' time is spent within the Integrated Development Environment (IDE) [48], so researchers are trying to leverage it by augmenting its collaborative capabilities (e.g., [17, 27, 32, 33]). Nevertheless, the IDE remains a tool that primarily helps individual programmers to be more effective in the classical edit-compile-run cycle [73].

In this paper, we explore how to better support collaboration in teamwork within the IDE. Our research is set up in two phases: an exploratory investigation, followed by the design and evaluation of a medium-fidelity click-through prototype. In our investigation, we explored how developers in large development teams experience collaboration and identified the problems they face when working in a team. To this end, we (1) conducted a brainstorming session with 8 industry experts working on the development of IDE solutions at Microsoft; (2) identified three core opportunities for the design of improved collaborative solutions, which we refined through semi-structured interviews with developers; and (3) interviewed 11 professional developers with various degrees of experience and seniority, from 9 different companies, to contextualize those opportunities. We report how, while our participants reported collaborating with others on code, they spend a limited amount of time actively engaged in collaboration. As a consequence, one of the core issues that emerged revolves around managing dependencies between activities, rather than working together contemporarily on the same (sub)task.
Our interviews with developers also confirm that issues arise due to the lack of information (i.e., imperfect information) when working on shared code, and uncover existing strategies to prevent or resolve them. Dependencies are often mediated through code, in the form of code changes. Yet, our investigation illustrates how dealing with code changes remains a common source of issues, despite existing tools and strategies, when working on the same project. This emerges as one of the main causes of frustration in interviewees' experience of teamwork. From our findings we derive implications and recommendations for improving coordination in modern IDEs.

In the second phase of this study, we investigate how to improve the handling of a team's code changes from within the IDE. Using common usability heuristics [56], we describe opportunities to better support teamwork in the IDE by supporting information needs about change history. We leverage this analysis to: (1) derive requirements for an IDE extension, (2) describe the design of BELLEVUE, a prototype fulfilling these requirements, and (3) iteratively evaluate BELLEVUE with 10 senior developers from different companies and backgrounds. BELLEVUE makes incoming code changes always visible during development and eases the use of change history in the context of the developer's tasks and flows. It shows historical information within the active code editor to let users modify the code without a context switch. To achieve this, BELLEVUE takes already available historical change data and offers an interactive view that shows detailed historical information for files and code chunks with respect to a previous version.

(This paper is a pre-print of: Anja Guzzi, Alberto Bacchelli, Yann Riche, Arie van Deursen. Supporting Developers' Coordination in the IDE. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work, CSCW 2015, March 14-18, 2015, Vancouver, BC, Canada. Copyright 2014, Software Engineering Research Group, Department of Software Technology, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology. All rights reserved. No part of this series may be reproduced in any form or by any means without prior written permission of the publisher.)

EXPLORATORY INVESTIGATION: METHODOLOGY

In this section, we define the scope of our research and our research methodologies (illustrated in Figure 1), divided into the following main steps: brainstorming, semi-structured interviews, and data analysis with card sorting.

Scoping

We scoped our initial focus by tapping into the collective knowledge of eight industry experts engaged in the design and implementation of development tools (including one of the co-authors, and the first author as a research intern). To do so, we organized a brainstorming session on the challenges and opportunities revolving around team development. The brainstorming led to the identification of the following areas for further investigation: awareness (i.e., knowing other people's work to anticipate issues and to proactively seek out information when needed), work dependencies (i.e., anticipating and understanding who I depend on and who depends on me), and breaking changes (i.e., anticipating what and who will be impacted by the breaking change I am about to release). As we will explore in more depth later, these areas describe times where lack of information can lead to resource- and time-consuming situations. Such situations are common not only in development, but in teamwork in general, where imperfect information is the norm [38].
Consistently, the literature reports that developers often have difficulties facing questions such as: "What have my coworkers been doing?" "How have resources I depend on changed?" "What information was relevant to my task?" "What code caused this program state?" [7, 26, 45, 68]. In our work, we iterate on this by providing a more in-depth view of some of these specific problems, and by illustrating and validating ways of addressing them within the flow of activities developers engage in.

Semi-structured Interviews

To gather more details about the context in which these issues emerge, and about current strategies for addressing them, we conducted semi-structured interviews [50] with professional developers. In total, we interviewed 11 developers varying in experience, team size, and company. Table 1 summarizes the interviewees' characteristics. We conducted each 90-minute interview over the phone, and transcribed the interviews for later analysis. After each interview, we analyzed the transcript and split it into smaller coherent units (i.e., blocks expressing a single concept), which we summarized using an interview quotation or an abstractive sentence. In addition, the interviewer kept notes (i.e., memos) of relevant and recurring observations in a document that was iteratively refined and updated. Out of the interviews, 56 memos emerged.
<table> <thead> <tr> <th>ID</th> <th>Overall experience</th> <th>In current team</th> <th>Role</th> <th>Team size</th> </tr> </thead> <tbody> <tr> <td>D1</td> <td>7.5 years</td> <td>4.5 months</td> <td>dev</td> <td>4</td> </tr> <tr> <td>D2</td> <td>10 years</td> <td>6 months</td> <td>dev lead</td> <td>7</td> </tr> <tr> <td>D3</td> <td>2 months</td> <td>2 months</td> <td>junior dev</td> <td>4</td> </tr> <tr> <td>D4</td> <td>1.5 years</td> <td>1.5 years</td> <td>sql dev</td> <td>5</td> </tr> <tr> <td>D5</td> <td>20 years</td> <td>10 years</td> <td>senior dev</td> <td>4</td> </tr> <tr> <td>D6</td> <td>25+ years</td> <td>7 years</td> <td>senior dev</td> <td>15</td> </tr> <tr> <td>D7</td> <td>21 years</td> <td>2 weeks</td> <td>senior dev</td> <td>10</td> </tr> <tr> <td>D8</td> <td>25+ years</td> <td>10 years</td> <td>senior dev</td> <td>16</td> </tr> <tr> <td>D9</td> <td>1 year</td> <td>1 year</td> <td>dev</td> <td>5</td> </tr> <tr> <td>D10</td> <td>15 years</td> <td>5 years</td> <td>dev lead</td> <td>5</td> </tr> <tr> <td>D11</td> <td>20 years</td> <td>6 years</td> <td>senior dev</td> <td>11</td> </tr> </tbody> </table>

Table 1. Interviewed developers

To analyze our data, we created 562 cards from the transcribed coherent units and the memos. Each card included: the source (transcript/memo), the interviewee's name (if from a transcript), the unit/memo content, and an ID for later reference. We used card sorting [69] to extract salient themes, leveraging a combination of open and closed card sorts. After macro categories were discovered, we re-analyzed their cards to obtain a finer-grained categorization. Finally, we organized the categories using affinity diagramming [52].

EXPLORATORY INVESTIGATION: DATA

In this section, we present the results of our exploratory investigation.
When quoting raw data, we refer to individual cards using a [DX.Y] notation, where X represents the developer and Y (optional) the card (e.g., [D2.03] refers to card 03 from the interview with D2).

Teamwork from the developers' perspectives

According to the interviewees, collaboration in teamwork is defined by a wide spectrum of activities:

Communication - Collaboration is communication: As D8 said: "It is all about communication. If you have good communication, you have good collaboration." Communication can be both one-to-one and one-to-many, can be formal or informal, and goes through different channels. Channels are "conventional," such as email and instant messaging (IM), but also more domain-specific ones, such as interactions via project management tools (e.g., source code management systems and requirement tracking systems) and source code; as D4 explains: "to communicate typically we use IM, but we also have an internal wiki that we use" [D4.02, D7.09, D8.(01,02)].

Helping each other by sharing information - As D9 said: "Collaboration is just sharing information and ideas." In particular, according to interviewees, collaboration means being proactive and sharing useful information to make it easier for others to complete their work (e.g., "make [the acquisition of information] as easy as possible on the other co-workers, so that they don't have to struggle" [D7]). This aspect of collaboration involves voluntarily sending notifications (e.g., "FYI—for your information" messages) and starting discussions (e.g., "let's coordinate on this change...").

Knowing what others know - Collaboration, from the interviewees' perspective, also means staying aware of who the experts are on the different parts of the system (i.e., "the domain experts") and understanding artifacts and code changes done by colleagues. D11 explains: "[collaboration] is keeping track of what everybody is working on and being able to know how the different pieces are in place".
According to interviewees, knowing what the others know has the aim both of preventing problems and of reacting faster when they occur. [D2.01, D4.03, D7.04–07, D11.47]

Working on the same goal, doing different things - Overall, developers see collaboration as working toward the same goal (e.g., product releases) by acting in parallel on different parts of the same project (e.g., working separately on different code artifacts): "Collaboration is we are all working towards the common goal, but maybe working on different parts of it, but these components do interact" [D7]; "[Collaborating meant] we divided up the work […], we went off these directions, and as things started to merge together, we go on [merging] on a case by case base." [D3]. [D1.2, D3.01, D4.01, D7.01, D9.(02,06)]

Dealing with imperfect information in teamwork

To investigate how developers deal with imperfect information, we outlined three concrete teamwork scenarios in which the existence of imperfect information can generate problems. The scenarios were derived from teamwork situations commonly described as problematic in the literature.

Efficient Task Assignment

Scenario. One developer is assigned to a task while another is already working on a similar or related task. This introduces inefficiencies in the team (and potential collisions).

Related literature. Software engineering researchers have recognized task partition and allocation as important endeavors in the context of teamwork [44, 46]. If dependencies and common traits among tasks are not correctly handled, developers can reduce their efficiency and generate unexpected conflicts [37]. The literature suggests different techniques, with varying results, for efficient task assignment (e.g., [16, 24]). In particular, the assignment of bug fixes (or new features to implement) from a repository of issues or requests to the most appropriate developers is one of the main instances of task assignment investigated by researchers [1].
Researchers reported that bug triaging is a time-consuming task, which requires non-trivial information about the system and often leads to erroneous choices [2]. A number of techniques have been devised to tackle the triaging problem, e.g., [53, 41].

Interviews' outcome. Although considered realistic, our participants did not see this scenario as a core issue. In fact, the task assignment processes of teams are in general considered effective at preventing the problematic scenario from taking place. In some teams, supervising figures (e.g., managers) do the task assignment (e.g., "[a new task] goes to a manager, who decides whom to assign" [D8], and "the boss will tell you about a task [to do]" [D9]); in the other teams, tasks are assigned during team meetings, with various degrees of developer interaction (e.g., "we are using daily SCRUM meetings" [D1], and "we break up the code, and if the [task] is in your code, it's yours" [D5]).

Simultaneous Conflicting Changes

Scenario. Developers find themselves in a situation where there is a merge conflict (i.e., when different people are touching the code at the same time).

Related literature. A recent significant effort in the software engineering research community is devoted to detecting concurrent modifications to software artifacts (e.g., [35, 36, 59, 65]). In fact, developers making inconsistent changes to the same part of the code can cause merge conflicts when changes are committed to the code repository, which leads to wasted developer effort and project delays [13, 42, 66]. Grinter conducted one of the first field studies that investigated developers' coordination strategies [29]. She observed that it is sometimes difficult for developers (even expert ones) to merge conflicting code without communicating with the other person who worked on a module.
Later, de Souza et al., observing the coordination practices of a NASA software team, found that developers in some cases think individually, trying to avoid merging, while in others think collectively, holding check-ins and explaining their changes to their team members [21].

Interviews' outcome. Our participants reported only rarely encountering a situation where more than one person was working on the same file at the same time ("we don't run into those situations a lot" [D2]). Most of our participants' teams were organized to make developers work on different parts of the system, with a key person in charge of coordinating development to prevent those issues, typically a lead developer or an architect. Some participants also used technical solutions to avoid concurrent edits (e.g., "We put a lock on [the file], so it does not get edited [by others]" [D1]). When a merge conflict happened, our participants reported resolving it quickly and easily (e.g., "The best and quickest solution you have is to undo, we roll back and [fix it]" [D1]; "typically, it is solved really quick" [D2]), often using merging tools (e.g., "we don't have to do much manually" [D8]). Although automatic merging was used, our participants also explained that they manually checked each conflict, revealing that it is not entirely trusted.

Breaking changes

Scenario. A developer/team changed some parts of the code, rendering the work of another developer or team unusable or obsolete.

Related literature. Researchers consider breaking changes problematic, not only for developers who receive the change and have to understand and adapt their code, but also for developers who are making a change that might break the functionalities of many clients [3].
The literature shows investigations (e.g., [22]) on the effect of dependencies that break other people's work, and proposed methods to address the subsequent problems, at the scale of both single systems (e.g., [10, 72]) and ecosystems (e.g., [32, 61]).

Interviews' outcome. The reaction of our participants differed according to the origin of the breaking changes. When the origin was considered external to the team or company, or when participants felt they had no opportunity for intervening or displaying their disappointment to 'the breaker', they accepted the breaking changes without strong negative emotions, treating them as inevitable. This happened even when resolving the issue might take a long time (e.g., "more than year" [D2]) or when it resulted in large operational or maintenance costs (e.g., "this [break] was costing the company many thousands of dollars per minute." [D7]). However, when the origin was internal to the company/team, participants reported strong negative emotions (e.g., frustration). This seemed in part due to the mismatch between the communication expected from being in the same company/team and the "waste" of time spent finding the cause of the issue, which might in turn be resolved relatively quickly (e.g., "I spend a couple of hours to find out the error [...] fixed in 5 minutes." [D3] and "I spent a day fixing the problem I spent three days finding." [D8]). Generally, breaking changes leading to syntactical errors were not considered an issue, because they could easily be spotted with static analysis tools (e.g., pre-compilers) and fixed. In effect, those particular breaks were considered a direct consequence of the lack of coordination effort from the person introducing the breaking code [D1]. Some of our participants insisted that breaking changes whose origin is internal to the team or company should be handled more smoothly and proactively.
For example, some would prefer stricter rules to avoid internal breaking changes: "people breaking other's people code [...] I'd like to see management being more rigorous about it" [D8].

Receiving a code change

Managing internal breaking changes is the most problematic scenario that emerged from our analysis. In this section, we analyze how developers deal with changes made by peer developers working on the same project. Our participants reported that they investigated changes in the code-base when they were notified of them (e.g., via automatic emails from versioning systems). However, they mostly did so to discover whether the changes had an impact on their own code, rather than to build a holistic view of the whole code base. In doing so, they first need to assess how relevant the change is to their current or upcoming work to decide whether or not to investigate further. On some rare occasions, developers used this opportunity to explore others' work not because it impacts theirs, but from a style/aesthetics perspective, looking at coding styles, norms, approaches, and solutions, especially if the changes were made by a respected colleague (for learning/inspiration) or a novice (for peer reviewing). When our participants reported discovering an error caused by a change made by someone else, their most prominent complaint regarded the lack of coordination that they felt should have accompanied the change (e.g., they would have expected a "heads up" email). However, in the case of clear syntactic errors (e.g., compilation errors, or runtime errors generating a stack trace), participants did not feel the kind of frustration they expressed in the case of semantic errors (e.g., caused by a library that changes some sorting order, therefore impacting the dependent code) or unclear alterations in behavior.
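The contrast between syntactic and semantic breaking changes can be sketched with a toy example (hypothetical function names; this is an illustration, not code from the study). Deleting a function fails loudly at every call site, whereas flipping a documented sort order keeps all call sites running while silently changing their results:

```python
# Hypothetical shared library, version 1: results are sorted ascending.
def top_results(scores):
    return sorted(scores)

# Client code written against version 1, relying on ascending order:
def best_score(scores):
    return top_results(scores)[-1]  # last element = maximum

# Version 2 introduces a *semantic* breaking change: the order flips.
# Every call site still parses and runs without any error.
def top_results_v2(scores):
    return sorted(scores, reverse=True)

def best_score_v2(scores):
    return top_results_v2(scores)[-1]  # same client code, now returns the minimum

print(best_score([3, 1, 2]))     # 3: correct under version 1
print(best_score_v2([3, 1, 2]))  # 1: silently wrong under version 2
```

A syntactic break, such as removing `top_results` outright, would instead surface immediately as a `NameError` (or a compile error in a statically typed language), which matches why interviewees reserved their frustration for the silent, semantic case that demands a lengthy hunt for the cause.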
In fact, semantic errors required participants to perform a deeper and more time-consuming analysis to understand their cause [D3.47,49, D8.28,29,36]. Once they found the cause, they explained, they proceeded to measure the impact on their code by, for example, measuring how many files or lines of code needed to be changed (as D8 explained: "I measure the impact of a change [looking at] how many files/lines it affects. A few? Hundreds?"). Usually, the developers receiving the breaking change were the ones responsible for adapting and fixing their own code. However, when a change had a deep impact on the code-base, and required more information about the change (e.g., its rationale) and the code-base, developers usually wanted to contact the author. Participants also reported that, when the change introduced a defect, those receiving it were responsible for deciding whether to file an issue report against the change with the change author.

Some participants mentioned that the lack of testing contributes to faulty changes being committed to the repository (e.g., "we are really bad at testing [...] you pull and you get a file you try to run and it fails" [D9], "If I'd tested it better, I wouldn't have put [this code] in the build" [D5]). Nevertheless, they also warned that running all the tests for each change would be too expensive ("all tests, to run them all, it would take 3 weeks. Unfeasible to take 3 weeks for each check in" [D6]). They also warned that testing performed on one setup might be unreliable on a different one ("we test and it's all good, but then they test on their end and it might break [...]. It's something to do with customizing." [D2]), and that many semantic changes could not be detected by tests ("even if there are tests that check [most] things, you'd still end up with edge cases. [...] You still need to see that you break, and then react, and then fix it" [D6]).

EXPLORATORY INVESTIGATION: INTERPRETATION

Teamwork Collaboration Is Coordination

The terminology used in many disciplines [51] defines coordination as "managing dependencies between activities," and collaboration as "peers working together." In this light, what participants consider collaboration in teamwork is mostly coordination, needed to organize the individual work toward achieving a shared goal. By analyzing the data from the interviews, coordination emerged, after individual work, as the dominant interaction level when working in a team, rather than collaboration. In particular, our participants described that:
1. They spend most of their time doing individual work;
2. Most of their interaction is to coordinate (e.g., through daily stand-ups);
3.
In their work, collaboration happens infrequently and on a need basis (e.g., with (bi-)weekly sprint meetings);
4. Most of the time, their intention for collaboration is coordination leading to individual work.

By abstracting the explanations of interviewees, we model developers' interaction at three levels (from lowest to highest degree of interaction): individual work, coordination, and collaboration. Individual work corresponds to no interaction (e.g., a developer working on her own), while collaboration means developers working together at the same time on the same (sub)task (e.g., pair programming). Coordination is an intermediate level of interaction, where developers interact but are not working together on the same (sub)task (e.g., during task assignment). An activity is a working state that can be placed as a point on the corresponding level of interaction. In a set of activities done to accomplish a certain subtask (i.e., a 'working situation'), particular events often serve as a transition between levels of interaction, for example, steps from individual work to coordination (e.g., "[when a file is locked] we just [ask]: 'hey what are you working on? And then when you think I can do it?' to the author" [D2.10]), and from coordination to collaboration (e.g., "sometimes we [...] get together and talk about [similar things], then realize how we can do these things together and do them together" [D1.45]). Figure 2 depicts our model of developers' interactions.

Figure 2. Model of developers' interactions in working situations

Figure 2 shows two working situations: In the first (WS1), a developer doing individual work asks another developer to make a change in their code (e.g., "I asked one of the guys: '[...] I need a method that would return [a special object], can you write it for me?' He was able to write [it] and knew exactly where to go" [D3.09,15]).
This requires going from individual work (A1) to coordination (A2) when asking the other to make the change, and back to individual work (A3) when they reach an agreement, without reaching a state of collaboration. In the second situation (WS2), two developers decide to work together on a subtask. This requires moving from individual work (A4) to coordination (A5) when they decide, then to collaboration (A6) for the actual work. The steps between the different levels of interaction in the model are not necessarily discrete: intermediate interaction levels can be reached. For example, while the activity of task assignment can generally be placed on the coordination level, when the task assignment is discussed together in a meeting it can be put on a level between coordination and collaboration.

Implications

Our participants report that most of their time is spent doing individual work, while, unexpectedly, they report spending very little time collaborating on the same subtask. A direct consequence is that interactions revolving around coordination are a more actionable area, with better research opportunities and greater potential impact, than areas considering purely the collaboration aspects. For example, better support for communication would have more relevance than tools for concurrent real-time coding. In addition, techniques for supporting information sharing among developers should take into account that developers spend most of their time doing individual work. Considering that most of this individual work is spent in the development environment (the IDE) [48], tools that support coordination within the IDE have the potential for greater impact.

The role of information. In our study, we uncovered how available information was pivotal in transitioning between levels of interaction (Figure 3).
This happened when our participants acted on information they had already acquired earlier, reacted to incoming information, or sought out new information, for example, through communication or by understanding code changes done by colleagues. Researchers have been studying the importance of information sharing for teamwork over the years from different angles. For example, Cataldo et al. introduced the notion of socio-technical congruence, and provided evidence that developers who share information when making changes in dependent software components complete tasks in less time [15]. Other researchers similarly showed that missing information correlates with coordination problems and project failures [14, 63, 47]. Ko et al. analyzed the information needs of developers in collocated teams and found that they have difficulties answering questions related to information about colleagues' work and software component dependencies [45]. In our interviews, developers reported knowing how to deal with the investigated scenarios involving imperfect information, except when they received an internal breaking change. We suggest that this is connected to how easy it is to access the information they need to address the problem. Analyzing the ways developers/teams successfully deal with a condition of imperfect information, we see that the solutions to the problematic scenarios require information to be shared in two ways: (1) via direct communication (e.g., during a meeting), and (2) by making it visible (e.g., in a tool).

<table> <thead> <tr> <th>Scenario</th> <th>Communicated</th> <th>Visible</th> </tr> </thead> <tbody> <tr> <td>Task assignment</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>Simultaneous changes</td> <td>✗</td> <td>✓</td> </tr> <tr> <td>Breaking changes</td> <td>✗</td> <td>✗</td> </tr> </tbody> </table>

Table 2. Information in investigated scenarios

Table 2 shows that for the non-problematic scenarios, the needed information is either communicated or visible. In the case of task assignment, inefficiencies are avoided by centralizing the task assignment to the team leader, who has all the information "visible" in mind, or by conducting group meetings in which the information is communicated. Other researchers report evidence of this behavior: Begel et al. described that industrial program managers and developers have regular team meetings to effectively prioritize bugs and to coordinate component completion schedules [8]; and Guzzi et al. reported that when open source software developers meet in person, they manage to advance the coordination of their projects better [31]. In the case of simultaneous changes that were not avoided by team policies (i.e., through modularization and technical locks), the information necessary to solve the merge conflict is immediately visible through the merge tool. In their analysis of parallel development practices, de Souza and Redmiles similarly reported that issues were averted through the mediation of configuration management tools [22]. In the case of breaking changes, we suggest that the needed information is neither communicated in time nor easily accessible/visible. As a result, developers can spend a long time finding the information they need to coordinate. This is in agreement with other studies that report how breaking changes are due to missing information and lead to significant loss of time (e.g., [61]).

Implications

Our analysis showed that the effort spent in gaining the information developers are missing can be a source of negative emotions. This underlines the importance of information sharing practices, both active (e.g., communicated) and passive (e.g., visible via a tool).
Researchers proposed a number of tools (e.g., Palantir [65] and FASTDash [9]) to detect merge conflicts and tested them in laboratory settings with seeded conflicts. These tools helped developers to spend less time resolving conflicts and encouraged communication. An interesting avenue for future research is to verify the overall impact of these tools on teams whose structure maps the software architecture, as our participants reported not encountering this issue. In addition, in most of our investigated scenarios, we observed that—unexpectedly—developers already had means to deal with missing information, or did not consider these scenarios as issues. In contrast, the results of the study by de Souza and Redmiles highlight the significant differences between how two unrelated companies deal with the management of dependencies and the underlying information sharing [22]. This suggests that what is considered a critical issue for one company or project might not be important for another. As a consequence, when investigating potential problems generated by lack of information, it is important to first study whether and how the target developers already employ any method to supply this missing information.
Changes and dependencies
As de Souza and Redmiles explained: “it is not possible to study changes in software systems without studying dependencies” [22]. In this light, our analysis of coordination and receiving changes is related to the work by Begel et al. [8] and by de Souza and Redmiles [22]. Begel et al. conducted a survey of Microsoft developers, testers, and program managers to see how they coordinate on dependencies (e.g., tasks) within the same team and among different teams. The study reported that most Microsoft developers (63%) minimize code dependencies to mitigate problems. This is similar to our interviewees, who use software modularity to avoid inefficient task assignment or merge conflicts. Similarly to our findings, Begel et al.
also reported that lack of communication often led to coordination problems, and that email is the common means by which developers kept track of dependencies. In contrast, our study outlines the difference between internal and external dependencies and changes. Begel et al. found that internal dependencies are managed by “send[ing] an email and pay[ing] a personal visit to the person blocking their work” [8], and the surveyed developers did not report any negative emotion. Our findings underlined that, in the case of internal breaking changes, the process preceding the communication with the person blocking the work (i.e., finding the source of the problem) is a cause of dissatisfaction and frustration in cases where the expected communication did not take place. Moreover, the two studies present different definitions of external dependencies and breaking changes: (1) according to Begel et al., dependencies are ‘external’ if they are in different teams within the same company, with which it is possible to communicate personally; (2) according to our findings, dependencies are ‘external’ if they are in different companies, with which it is extremely difficult to communicate. In the former case, Begel et al. reported that developers have to maintain personal communication with external teams to stay updated about changes, and the existence of unexpected changes from external teams generates anxiety. In the latter case, our interviewed developers did not report anxiety (even though unexpected changes happen and lead to loss of time), but rather acceptance of the situation as part of the natural business model of the industry. In their work, de Souza and Redmiles investigated the strategies software developers use to handle the effect of software dependencies and changes in two industrial software development teams [22]. The two teams deal with internal dependencies according to our definition.
One team (MVP) allows parallel development and the modularity of the system is low; the other team (MCW) focuses on modularity by using a reference architecture. Our interviewed developers have complaints similar to those in the MCW team, which also has strikingly similar practices: In both studies these teams avoid inefficient task assignment with modularity, their developers have problems identifying their impact network (they do not know who is consuming their code or whether changes can modify the component they depend on) and are only interested in changes in particular parts of the architecture that impact them. Moreover, developers in both MCW and our study have the expectation that major changes are accompanied by notifications about their implications, yet are worried about information overload resulting from too frequent notifications. Conversely, the MVP practices seem to align with our participants’ description of an ideal scenario where emails are sent to update about changes, everybody reads notification emails, management urges developers to notify about breaking changes, and such emails even suggest courses of action to be taken to minimize the impact. As a result, despite the parallel development, coordination in MVP seems smoother than in our participants’ experiences. One important characteristic of MVP, mentioned by de Souza and Redmiles, is that most developers have worked on the project for at least two years, and their experience could also be the cause of the difference with MCW, which is a newer project. Our results, though, do not seem to corroborate this hypothesis, since interviewed developers reported similar issues regardless of project maturity and personal experience. Our additional analysis of code changes looks at coordination from a low-level perspective; we found that most information developers need to coordinate is typically available, but not necessarily accessible.
Implications
Our study confirms that lack of coordination leads to late discovery of unexpected errors or behaviors in the code, followed by a time-consuming analysis to find the code changes that are the source of the issue. This calls for better support for coordination when developers make and receive changes, and for when they need to investigate a change to determine its impact. As the existing body of research suggests, impact analysis and support for change understanding in software development scenarios remain problematic. Research prototypes have not yet reached widespread usage in the IDE, and our findings underline the substantial practical relevance of further research in these areas. The differences between the coordination practices of our interviewees’ teams and those of the MVP team described by de Souza and Redmiles [22] are an interesting avenue for future research. In fact, a compelling hypothesis is that the modularity adopted by our interviewees’ teams and MCW could create asymmetries in engineers’ perceptions of dependencies [30], thus being at the basis of the differences and generating the reported coordination issues. By investigating how developers currently handle received code changes in the IDE, we realized that they do many tasks manually, and spend a lot of effort to collect and remember change information. The data that would help developers in their tasks is available (e.g., data recorded by the versioning systems), but not easily accessible. This implies that better support for integrating change information in the IDE is needed and would impact development and coordination.
DESIGNING AND EVALUATING BELLEVUE
Figure 4. The research method applied in the second phase
Building upon our findings from our interviews, we aimed to design a tool to help developers anticipate, investigate, and react to breaking changes in a collaborative development environment. Figure 4 outlines the process.
Design requirements
We first analyzed the current approaches for receiving changes in the IDE in light of widespread usability heuristics [56] (Point 1 in Figure 4). We found several unmet heuristics that, together with the data collected in the exploratory investigation, we used as a basis to derive requirements for our IDE extension to improve receiving changes and support teamwork (Point 2).
Recognition over recall
“Memory for recognizing things is better than memory for recalling things” [49]. Once a developer decides to merge a received change with the local version, the information about the integrated change disappears. For this reason, when developers encounter a bug, they must recall which changes occurred and whether any of them could have generated the problem. One participant explained that the frustration when he encounters a bug comes from “figuring out where the problem is: Trying to figure out what really has changed” [D5]. We suggest that when looking for the cause of a bug, developers’ memory can be aided by tools to navigate change history, but existing tools require switching away from the current development context and typically give the information outside of it.
Visibility of system status
“The system should always keep users informed about what is going on” [56]. Once changes are integrated, development tools provide no distinction between code already present before the merge and the newly integrated code. Therefore, there is no clear visibility of the system status with respect to its history. “It’s kind of impossible to know every single line of code that everybody else on your team changed” [D3]. While historical information is available, it typically resides in dedicated tools or views, out of the current development context; thus the status is neither self-evident nor easily accessible: “there isn’t really an easy method [...]
that let you see [that] these ten files are different from what you had in your current time” [D5].
Clearly marked exits
“A system should never capture users in situations that have no visible escape” [55]. In software engineering, code repositories typically provide a change history that gives developers an escape: If they find something not working after they merged some changes into their local working copy, they can roll back to the status prior to the merge. The problems with this approach are: (1) the exits are not evident, and (2) the exit strategy is binary. The first issue means developers sometimes do not realize that their problems could be addressed by undoing the merge, and instead try to find an error in their own code. The second issue means that developers can only undo all the merged changes at once, although the error can be caused by a mistake in a small fraction of the changed code. Once the code is rolled back, developers have to reconsider all the undone changes and figure out which ones could have caused the error, without having the full IDE context at their disposal, but only the change information, and then integrate all the unrelated changes back again. D1 explained: “It’s a loss of time, we have to roll back, figure out [what the problem was], and roll again. It’s a loss of time, definitely.”
Help and documentation
“[Documentation should] be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large” [56]. In development processes, documentation also consists of the explanations software developers write as they commit their changes to the shared repository. It also includes other sources of information, such as descriptions of work items or bugs, stored in bug management or work item management tools.
These pieces of information are accessible to the developer, and the commit messages are available to inspect upon receiving code changes, but once the changes are integrated they disappear, unless the user performs a number of steps navigating the history of the code in specialized windows or applications. Additionally, comments in the code commits and in the work management tools are often disjointed. For example, D5 complained: “When you get the latest [changed files] you get tons of files”; he found it very difficult to find the necessary help or information due to information overload. Finally, when developers integrate more than one commit into their local copy, they often see only the last commit message, even though a line of code could have been changed several times between their local copy and the current status.
Help users recognize, diagnose, and recover from errors
Current code change management in IDEs makes it difficult to recognize and diagnose errors generated by integrated code changes, because they are not visible and the history has to be analyzed outside of the current development context. One interviewed developer explained that, despite the availability of external history tools, “one of the problems is trying to figure out what really has changed [and] what’s the impact on your code” [D5]. In fact, as D3 explained, external tools are not helpful because “version control gives you a list of files that changed and not the specific lines”: Seeing exactly which part changed and how takes many steps. Moreover, the only possibility to recover from errors is to do a complete undo of the merged changes, while it might be enough to modify a small part of the code to fix the error.
System design requirements
To address our current concerns with imperfect or missing information in development tasks, we suggest the following requirements for development tools: (1) received code changes should always be visible; (2) information should be provided in context, both semantic (code) and procedural (history, project), without undue actions by the user, both at the project and file level; (3) the history of code chunks should be easily accessible, possibly using progressive disclosure to prevent information fatigue; (4) error identification and diagnostics should be supported through a fluid integration of code history; (5) code changes should be reversible at the sub-file level; and (6) context switches to acquire the knowledge necessary to solve a task should be avoided.
Prototype and evaluation
Consequently, we devised an IDE extension, named BELLEVUE, to fulfill the requirements outlined above and to serve as a tool to explore our preliminary design ideas (Point 3 in Figure 4). The prototype allowed us to communicate our ideas to various experienced designers and practitioners at Microsoft, and to get their feedback, reveal early problems, and improve the initial concept (Point 4 in Figure 4). We then devised a detailed storyboard including a high-fidelity prototype of BELLEVUE (Point 5). This was implemented as a PowerPoint presentation with a sequence of believable action steps of interaction with the prototype. Each step was devised to let the participants of the evaluation phase observe what was happening, explain what they would do, and describe the effects they would expect as a consequence of their actions. We used this prototype to evaluate BELLEVUE with professional software developers, using the RITE (Rapid Iterative Testing and Evaluation) method [54], to identify problems in the prototype, quickly fix them, and then empirically verify the efficacy of the fixes (Point 6).
Participants in the RITE study were selected from a population with the following characteristics: More than three years as a professional developer, more than one year in the current company, and more than three months in the current team. Moreover, interviewees had to spend at least 20 hours per week writing and editing code, their team had to use a source control management system, and they had to have at least browsed the change history, encountered a change merge, or used the file diff comparison view in the month before the RITE. Evaluation invitees were thanked for their participation with a gratuity in the form of Microsoft software. Each session took place in a full usability lab on the Microsoft campus, and was video recorded for later analysis. To mitigate the moderator acceptance bias [28], we explained that the researcher guiding the session did not create the product. Moreover, to mitigate any social desirability bias [28], and to encourage discussion, the storyboard plot described the actions taken by a proxy developer named James. Following the storyboard plot described by the slides and the researcher, participants were asked to follow a think-aloud protocol, and to indicate what they saw, would do, and would expect as a result of their actions on each screen page. After 9 iterations we reached a stable and validated design. At the end of the process (Point 7 in Figure 4), we had: (1) the finalized BELLEVUE prototype, (2) a set of changes to implement that were not eventually integrated, and (3) a set of candidate aspects to be investigated as future work. We designed BELLEVUE as a prototype code editing and navigation solution aimed at being lightweight and ready to use, without requiring developers to change their working style. It takes the historical change information that is already available, but currently neither visible nor easily accessible, and displays it in a non-obtrusive way.
BELLEVUE offers an interactive view that shows detailed historical information for files and specific chunks with respect to a previous version. We detail the features of BELLEVUE, as they were at the end of the RITE phase, and the feedback from participants (mentioned as R1–9). The final version of the slide-deck used in the RITE is available as a file accompanying this paper.<sup>1</sup>
Recognizable changed files and blocks
BELLEVUE decorates changed files with an arrow (Figure 5, Point 1), and denotes changed lines with a blue<sup>2</sup> colored sign, both at a fine-grained granularity (Point 2), to see them in the context of the current text window, and a more coarse-grained one (Point 3), to see them in the context of the entire file. One can decide (Point 4) to see only the files that were just merged into the current local version. This design supports recognition over recall: Once new changes are merged into the local version, their traces remain visible. It also enhances the visibility of the system status with respect to changes.
RITE participants’ feedback—All the participants appreciated this feature. In particular, they liked that it helps filter out irrelevant information when looking for the reason of an error that could have been introduced by a received change: “Knowing what I can ignore is huge, the larger the project, the more beneficial it comes” [R1]. Concerning the way in which changes are made recognizable, some users did not find it intuitive or appropriate: “I’d prefer a bar or something much more visible [than a blue-colored sign] to see that it’s different” [R2].
<sup>1</sup>Also available at: [http://www.st.ewi.tudelft.nl/~guzzi/](http://www.st.ewi.tudelft.nl/~guzzi/)
<sup>2</sup>This color has been chosen because it is currently considered a neutral color in the IDE, as opposed to green or red, which are often associated with versioning systems or debuggers.
Nevertheless, after they continued in the scenario and experienced the following features of BELLEVUE, they withdrew their concerns. Some participants suggested letting users personalize the color used to denote changes; other participants suggested using different colors to clearly distinguish added, removed, or modified lines, as currently happens in tools that display code differences.
Visible changes’ effect
To show the effect of a change in the code, the user can hover on any colored block to see the latest changes. For example, in Figure 6, the user decided to look at the changed block that was not visible in Figure 5. Then, by hovering on the colored sign on the left (Point 5), (s)he can see the effect of that change: The argument of the RemoveAt method call has changed (Point 6), and the Refresh method call has replaced a call previously present on the same object (Point 7).
RITE participants’ feedback—This feature was introduced in the third iteration of the tool, after considering the feedback received from the first participants. As an example, one participant had some expectations when hovering over the lines indicating a change: “toggle to highlight what’s different from the last version, to quickly diagnose, I don’t need a true side by side” [R3]. Once introduced, this feature was well received by all the remaining participants (e.g., “ok, good! I can see here how [this part] changed!” [R6]), because it also helps with the progressive disclosure of the information about the changes: Users can quickly verify whether the changes seem relevant and, only if necessary, investigate further.
**Accessible historical details**
In BELLEVUE the user can see the code history of any block that was changed with respect to the previous local version. This is achieved with one click on the colored sign to the left of the block of interest. For example, in Figure 7, the user decided to further inspect the history of lines 142–143 because they led to an unexpected behavior.
Once the block is clicked, a pane appears from the bottom (Point 8): It contains the historical details of the changes that happened to that block since the user’s last update. Each item represents a change and shows the changed code with respect to the previous commit (Point 9), the commit message documenting it (Point 10), and the contact information of the change author (Point 11). The changed code concerns only the chosen block, but it is possible to extend it by clicking on the ‘...’ before and after. It is also possible to inspect earlier history (Point 12).
RITE participants’ feedback—As for the other steps, before showing what would happen next, the interviewer asked the participants how they would interact with the design and what their expectations would be. In particular, for this feature, the interviewer asked what participants expected would happen by clicking on the colored sign on the left (Point 5). In this way, we learned that the participants wanted to have something similar to a diff with the previous version (e.g., “I’d do a compare versus the previous version, and just look at those particular changes” [R3]). The BELLEVUE solution was thus very much appreciated and often exceeded their expectations: “All the details! This is exactly what I was looking for: It tells me who [...] and it tells me what each one, and how long ago!” [R1]; “oh I see, so this is exactly what I was looking for. It’s even better!” [R8]. Seeing the version that could have introduced the error (i.e., #9044) was a clearly marked exit: Some participants considered recovering from the error by reverting that particular change, because that would not imply reverting the entire, more complex change set. Through the iterations, we added the clickable revision number (to open a panel showing all the changes in a revision), and the hovering function to show the full commit comment.
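For comparison, modern version-control CLIs already expose per-line-range history, though outside the editing context. A sketch in a throwaway git repository (the file name, contents, and line range are hypothetical, chosen to echo the RemoveAt example above):

```shell
# Scratch repository with two revisions of the same file, then ask git for
# the history of one specific line, similar to BELLEVUE's block history.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
printf 'int total;\nitems.RemoveAt(0);\n' > Widget.cs
git add . && git commit -qm 'add widget logic'
printf 'int total;\nitems.RemoveAt(index);\n' > Widget.cs
git commit -qam 'fix index used in RemoveAt'
git log -L 2,2:Widget.cs   # commits that touched line 2, with per-commit diffs
```

This surfaces similar who/what/when information to BELLEVUE’s history pane, but it requires leaving the editor and knowing in advance which lines to ask about.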
Participants’ suggestions that we did not eventually include in the iterative evaluation, due to time reasons, mostly regarded the possibility of selecting specific parts to see the history of, instead of the contiguous block (e.g., “I want to see the whole function [history, by] right clicking on a function” [R2]).
**Editable code**
BELLEVUE allows editing code while reviewing the history (Figure 8), because it integrates history within the active editing context. It also highlights the new code differently (Point 13 in Figure 9) and automatically adds a new item to the list (Point 14) to put it in its historical context. This differs from current approaches for visualizing history, which involve opening a distinct context or application, and do not make it possible to edit code (e.g., to fix a bug) and see history in the same context at the same time.
**RITE participants’ feedback**—This feature was very well received by all the participants. In particular, many were positively surprised and realized the possibilities of having code changes better integrated in the IDE: “I have a diff view, but I am not trapped in that […] I got my editor and my diff view, so the split view is very very helpful [...]. Let me do what I want to do, while looking at the information I needed to make my change” [R1]; “Now that I see, I know what is happening [...]. That is intuitive to me: Just clicking, edit, and go” [R7]. They also appreciated the immediate feedback of the change in the local history (Figure 9): “Oh, I like it shows it’s local” [R4].
![Figure 8. Editable code while accessing historical details](image)
![Figure 9. New local change added to history](image)
**RITE participants’ feedback**—The social interaction within the view was extremely well received by all the participants. They especially appreciated the possibility of quickly using email and IM: “I really like that. I’d click on chat” [R6].
When discussing the email they would write to the author of the buggy change, they all specified the things they would ideally like to see in the email, and when they saw it, they liked how it included everything they wanted: “That is perfect. [It is] exactly what I would have sent” [R1]. However, some would have liked to obtain the diff view as BELLEVUE shows it in the IDE, while “now it’s like standard diff” [R6]. Participants’ suggestions not integrated for time reasons were: adding an ‘email all’ feature, changing the email title to give information about the method and class in which the new change is taking place, supporting copy and paste from the history to the email, and adding communication clients (e.g., IRC or Skype).
![Figure 10. Contacting the author of a change from the IDE](image)
Evaluation debriefing
After each RITE session, participants filled in two short questionnaires about their experience with the tool: A System Usability Scale (SUS) questionnaire [11] and a proprietary 7-point Likert scale questionnaire standardly used at Microsoft. The SUS answers were overall positive: The mean SUS score is 85.1 (answers had $\sigma = 0.66$, on average), which is considered a high value across different domains [5, 6, 67]; as an example, the statement “I think that I would like to use this product frequently” scored 4.7/5.0 ($\sigma = 0.50$). The proprietary survey was equally positive: The mean score was 5.4/7.0 (the higher the better: items only included positive wordings [40]), with $\sigma = 1$ on average. For example, participants gave 5.4/7.0 ($\sigma = 1.13$) to the statement “This product has powerful functionality and excels at what it was designed for”, and “This product is something I am likely to share information about” scored 5.9/7.0 ($\sigma = 0.78$).
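For readers unfamiliar with SUS scoring, the 0–100 score reported above follows the standard published SUS rule (this is the general formula, not a procedure specific to this paper; the sample responses below are hypothetical): odd-numbered items contribute their rating minus 1, even-numbered items contribute 5 minus their rating, and the sum is multiplied by 2.5.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 ratings."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1,3,5,... positive; 2,4,6,... negative
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale the 0-40 sum to the 0-100 range


# A respondent who fully agrees with every positive item and fully
# disagrees with every negative one reaches the maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

The alternating sign convention is why raw item means (like the 4.7/5.0 reported for the first statement) cannot simply be averaged into the overall score.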
COLLABORATIVE SOFTWARE DEVELOPMENT TOOLS
Coordination in software development has been studied in the fields of Software Engineering and Computer Supported Cooperative Work since the 1980s, and researchers have produced a wide range of analyses and tools [62]. BELLEVUE uses historical change information to support developers’ coordination. Sarma et al. present a comprehensive review of coordination tools and define a framework that classifies those technologies according to multiple coordination paradigms [66]. In this framework, tools such as versioning systems and issue tracking systems support the development process and are at the basis of the more sophisticated tools that provide meaningful and automatically aggregated information: These are research prototypes and industrial applications conceived to better support developers’ coordination in the IDE. Such tools include full-fledged platforms, specific workspace awareness solutions, information discovery approaches, and code information visualization tools. Full-fledged platforms, such as Jazz [39] and Mylyn [25], are at the far end of the spectrum in terms of complexity [66], and aim at transforming the IDE experience. Jazz, or Rational Team Concert, is an IDE platform, built on top of Eclipse and Visual Studio, that integrates several aspects of the software development process, including integrated planning, tracking of developer effort, project dashboards, reports, and process support. Relations between artifacts can be defined and leveraged to gather project information. Jazz also offers support for communication within the IDE (e.g., instant messaging) that is more advanced than BELLEVUE’s.
Mylyn and its successor, Tasktop Dev [70], are based on Eclipse and Visual Studio and use task context to improve the productivity of developers and teams [43]; for example, they reduce information overload by providing developers with just the artifacts and information necessary for their current code modification task, and offer a comprehensive task repository to support teamwork by sharing information on tasks and their context. Both platforms support the creation of novel information (e.g., tasks and work items, and relations among artifacts) to support developers’ productivity, and encourage a task or work item based approach to evolution. BELLEVUE, instead, aims at using available data and visualizing it in a non-obtrusive way. Another example of improved communication in the IDE is REmail [4], which integrates developers’ email communication in the IDE to support program comprehension; REmail can be used in conjunction with BELLEVUE to extend the communication features of the latter. Workspace awareness solutions, such as Palantir [64], Lighthouse [19], CollabVS [23], Syde [34], and Crystal [12], are concerned with changes before they are committed to the source code repository, to address conflict detection or provide real-time development information. For example, Syde tracks fine-grained real-time changes and alerts developers in the code editor and in a dedicated view when potential conflicts are emerging. Given the goal of these tools, differently from BELLEVUE, they do not show information related to change history. Interestingly, the BELLEVUE design could be included in environments such as Mylyn and Jazz, and could be used concurrently with workspace awareness tools, in order to offer coordination support from a complementary perspective. Information discovery approaches, such as Ariadne [20] and Tesseract [63], seek and assemble information to perform tasks such as expert finding and socio-technical network analysis.
Recommender systems, such as Seahawk [57, 58], exploit change information and externally generated data to support software development and comprehension. Similarly to BELLEVUE, some of these approaches also use historical code information to inform their users. Given their goal, they offer different, complementary views on the data and integration with the development environment. Code information visualization tools include the “blame” functionality offered, for example, by git or svn. This feature allows one to see who made the last change to each line of code in a file, and when. Another tool is the concept presented by Rastkar and Murphy, in which the developer is able to see a summary of the commit messages connected to a line of code in the IDE [60]. In contrast, BELLEVUE offers an interactive view that shows detailed historical information for specific chunks with respect to a previous version. BELLEVUE always displays which files and lines changed, so it does not require the developer to actively ask for the commit message of a line, because the developer may not already be aware of the relevance of the file and the line. In our exploratory investigation, narrowing down a breaking change to the file and line causing the issue emerged as one of the most problematic and time-consuming efforts for developers.
FINAL REMARKS
In our study we explored how to support developers’ collaboration in teamwork. We focused on teamwork in the software implementation phase, which takes place in the IDE, and we conducted a qualitative investigation to uncover actionable areas for improvement. We identified internal breaking changes as one of the most important areas for improvement, because current IDE support for receiving changes is not optimal. Consequently, we designed BELLEVUE to enable developers to coordinate better, by making historical information visible and more accessible in the IDE. Overall, this paper makes the following main contributions: 1.
A qualitative analysis indicating that teamwork needs mostly concern coordination, that developers are able to face scenarios considered problematic in the literature, and that dealing with breaking changes is hard, but only generates frustration when the breaker is internal to the project.

2. Recommendations on how to improve collaboration in teamwork during the software implementation phase, such as focusing on interactions revolving around coordination rather than on collaboration on the same (sub)task.

3. Requirements for a tool to support teamwork, based on currently unmet usability heuristics and the results of our qualitative analysis; for example, to favor recognition of code changes over recall, and to increase the visibility of the codebase status with respect to received changes.

4. The design and evaluation of BELLEVUE, an IDE extension that supports teamwork by improving the integration of code changes in the IDE. BELLEVUE makes received changes visible inside the editor, and makes the history of code chunks easily accessible using progressive disclosure.

ACKNOWLEDGMENTS

We want to express our gratitude to the anonymous reviewers, whose valuable comments significantly helped to improve the paper. We warmly thank Andrew Begel for his first-class support during Anja's internship.

REFERENCES
Open Environments To Support Systems Engineering Tool Integration: A Study Using the Portable Common Tool Environment (PCTE)

Dave E. Eckhardt, Jr., Langley Research Center, Hampton, Virginia
Michael J. Jipping, Hope College, Holland, Michigan
Chris J. Wild and Steven J. Zeil, Old Dominion University, Norfolk, Virginia
Cathy C. Roberts, Institute for Computer Applications in Science and Engineering, Langley Research Center, Hampton, Virginia

The use of trademarks or names of manufacturers in this report is for accurate reporting and does not constitute an official endorsement, either expressed or implied, of such products or manufacturers by the National Aeronautics and Space Administration.

**Acknowledgments**

We appreciate the help of David Green from Lockheed Engineering and Sciences Corporation, who installed and maintained the PCTE software, and of Kathryn Smith, Denise Jones, Carrie Walker, and Steve Young from Langley, who provided in-house expertise with the three tools used in this study. We also want to thank John Turkovich of the Charles Stark Draper Laboratory, Inc., who provided an ASCII representation of the data flow diagrams produced by CSDL CASE.

**Abstract**

A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented. Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. This paper describes the project and the features of PCTE used, summarizes experience with the Emeraude environment over the project time frame, and addresses several related areas for future research.
**Introduction**

**Background**

With the rapid development of digital processing technology, NASA programs have become increasingly dependent on the capabilities of complex computer systems. Current flight control research, which advocates active controls (ref. 1) and fully integrated guidance and control systems (ref. 2), relies heavily on digital processing technology. These advanced guidance and control systems, designed to optimize aircraft performance, will demand high-throughput, fault-tolerant computing systems. Additionally, safety concerns will dictate that future generations of commercial aircraft have hardware and software systems with extremely low failure rates such that catastrophic failures are extremely improbable, that is, such failures are "not expected to occur within the total life span of the whole fleet of the model" (ref. 3). The functional performance, reliability, and safety of these systems are of great importance to NASA; thus, a component of the research within the NASA Aeronautics Controls and Guidance Program is directed toward the development of design, assessment, and validation methodologies for flight-critical systems (ref. 4). An important aspect of this work is developing the engineering tools that support cost-effective certification of future flight systems. The state of the art of this technology was a primary issue of discussion at a workshop on digital systems technology held at Langley Research Center. The consensus of this representative sample of the U.S. aerospace industry was that there is a "lack of effective design and validation methods with support tools to enable engineering of highly integrated, flight-critical digital systems" (ref. 5). Design methods are generally fragmented and do not support integrated performance, reliability, and safety analysis. There is a growing recognition that such integrated studies will require an integrated design and evaluation environment.
A primary purpose of such an environment is to achieve a level of integration of the diverse support tools used in the system development. Ideally, the environment is "open," as distinguished from "proprietary," in which case the integration of foreign tools is difficult at best. The integration function of a computer-aided software engineering (CASE) environment can be split into three areas: data, control, and presentation integration (ref. 6). In addition to these areas, a fourth area, which deals with process integration, is emerging as a critical functionality that can also be provided by the environment (ref. 7). Data integration can be achieved by exchanging data between tools directly or by storing the data in a shared project directory. A central repository for project information facilitates configuration management and tends to define project information structures that are independent of the specific tools used to manipulate this information. Control integration allows tools to coordinate their activities to maintain consistency between the information managed by each tool. A well-known example is the UNIX makefile system, which ensures that an executable program is generated from the latest versions of the source code and the "include" files. The purpose of presentation integration is to provide a uniform user interface to the services provided in the environment. Process integration deals with supporting the dynamics of software development by defining, managing, and certifying the set of activities across the software life cycle. Generally, the two approaches used to achieve tool integration are tool collections and an integrated project support environment (IPSE). The tool collection approach represents the state of practice. Here, the emphasis is on the tools themselves. 
This approach acknowledges a variety of tools on the market with a variety of mechanisms for working together and identifies the lowest common denominator of services for tools and support-specific end-user activities. The services provided and activities supported are typically no more than what is provided by the operating system. An IPSE provides a common infrastructure into which tools can be embedded. The reference model for an IPSE is the "toaster" model shown in Figure 1 (ref. 7). This model defines a set of services within a framework. The message-server network allows communication between different tools and services in the environment. Typically, this service builds on or extends the communications services provided by the underlying operating system. The user interface is typically provided by one of the emerging window management standards such as Motif (ref. 8). Task management, data repository, and data integration services are provided by the IPSE. By fitting into the "slots" of the toaster model, the tools are integrated and can work together efficiently. The largest roadblock to integration is a lack of widely accepted standards. Vendors have invested time and money into their own integration techniques and move slowly to discard or revamp their investment for standards that are not yet widely accepted. The result is a host of integration "standards," very few of which are compatible with each other. An overview of major standardization efforts can be found in reference 9. Of the tool-oriented standards, a recent standard from the Object Management Group (OMG) has emerged with implementations by Sun Microsystems, Inc. and Hewlett-Packard Co. (ref. 10). Of the IPSE standards, the Common Ada Programming Support Environment (APSE) Interface Set (CAIS) (ref. 11) and the Portable Common Tool Environment (PCTE) (ref. 12) are two standards that address the whole IPSE reference model. 
The PCTE is a European Computer Manufacturers Association (ECMA) standard tool-building framework that is gaining widespread support both in Europe and the United States.

**Objectives**

A primary research thrust of the Systems Architecture Branch at Langley Research Center is to develop the computer-aided technology for safety-critical software and high-performance architecture systems for advanced aircraft avionics. This work is motivated by the belief that computer automation techniques are eminently possible through focused research on application-specific domains and that these automation methods will result in significant gains in productivity, quality, and safety. To support this research, a project was initiated to evaluate an open environment software infrastructure as the framework for this design technology. This paper describes the use of the PCTE version 1.5 Public Interface Standard as implemented by Emeraude, a French company. This project emphasized the Object Management Services (OMS), by far the most significant feature of PCTE, and the tool encapsulation facilities of the Emeraude environment. The long-range goal of this work is to define the role and requirements of an open framework for developing highly integrated, flight-critical computer systems. Frameworks are vaguely defined but generally refer to environments for the communication and integration of tools in a process (ref. 13). Accommodating the entire design process is a recent emphasis of these environments, as opposed to the previous emphasis on the tools themselves, which often had proprietary interfaces. To better understand the role that an open environment can play in a tool integration context, a study was conducted in which three existing software products were encapsulated and used in a coordinated design process. These tools had not previously been used in a coordinated manner. Two of these products, CSDL CASE (ref.
14), a computer-aided software engineering design tool from the Charles Stark Draper Laboratory, Inc., and InQuisiX (ref. 15), a software reuse tool from Software Productivity Solutions, Inc., are alpha-release versions. The third tool, ADAS (ref. 16), an architecture design and analysis tool, is a commercial off-the-shelf tool. The objectives of the study were to

1. Demonstrate through a simple but realistic example the value of an open environment in facilitating a coordinated design process; because members of the project team did not have previous experience with the Emeraude environment, a small demonstration project was the fastest way to confront open environment issues in the tool integration context

2. Demonstrate the encapsulation and integration of existing software tools through a shared, common infrastructure; although productivity and reliability advantages exist for building new tools in an open environment framework, many existing tools serve useful functions and are not likely to be replaced in the foreseeable future

3. Identify areas for further research in open environments that are needed to support the development of automated design technology

**Project Description**

**Demonstration Context**

To meet the objectives outlined for this study, part of the study reported in reference 17 was reproduced and automated. That study examined the performance of various architectures for large-grain data flow parallelism. The architectures studied involved an array of processors interconnected to a scheduler and a work load generator. A typical architecture is shown in figure 2. The performance of an architecture was tested using a data flow graph depicting the major software processing elements and the order of execution as determined by the data flow between processing elements. Previously, the data flow graph was converted by hand into a textual representation that was read by the SpawnProcess component shown in figure 2.
The hand generation of different data flow work loads was one of the most burdensome tasks in the original study. Although CASE tools were available to generate these data flow diagrams, their outputs were not compatible with the inputs to the simulation tool used in the performance analysis. An objective of the project, then, was to investigate the direct use of the output of an existing CASE tool used by software designers as input to the simulation tool used by the architecture designers. Essentially, the architecture studies would have the benefit of using real software work loads, although for this study, the actual work loads defined in reference 17 were used. An overview of the demonstration project is shown in figure 3. For this demonstration, the Software Designer generates the software work load data flow graphs using the CSDL CASE software design tool. These diagrams are used to generate different textual representations of work loads for input to the ADAS simulation tool. Because many work loads could be generated for testing the performance of proposed architectures, ideally the work loads would be classified and stored in a reuse library. The application of the InQuisiX reuse-library tool would further demonstrate the capabilities of the open environment for integrating engineering tools. After the work load information was cataloged by the Reuse Librarian, the InQuisiX tool could be used by the Architecture Designer to browse the database and select the work load with the desired attributes to use with the ADAS tool.

**Tool Set**

The CASE tool used for generating the data flow diagrams was developed by the Charles Stark Draper Laboratory, Inc. (CSDL) under contract to Langley Research Center. This tool is oriented toward the aerospace controls engineer and can generate Ada code and documentation directly from engineering block diagrams of the control algorithms.
Figure 4 shows a block diagram for a yaw-damper algorithm consisting of first-order lags (FOLAG), washout filters (WOUT), switches (SWITCH), and limiters (lim), which can all be retrieved from a library. As originally developed, the CSDL CASE output consists of the automatically generated Ada code and supporting documentation. The engineering diagrams are stored in internal libraries and are not available to other engineering tools. One of the first tasks was therefore to specify a generic data flow representation and to task CSDL to produce this format from the internal representation within the CSDL CASE tool. When this task was accomplished, the data flow diagrams could be used with other engineering tools. The availability of InQuisiX, developed by Software Productivity Solutions, Inc. under Small Business Innovative Research (SBIR) contracts, offered another dimension for study within the scope of this project. InQuisiX can define a taxonomy for a set of objects and store and retrieve objects according to this classification scheme. Many of the features of the classification scheme of InQuisiX are available as part of the object management facilities of PCTE. However, InQuisiX does provide a user interface to search the object base for those objects that match user-specified criteria. A comparable searching facility was not available in the Emeraude implementation of PCTE, so InQuisiX was selected for the study to provide this capability. In discussions of the role of InQuisiX for this project, it was generally felt that the development of InQuisiX would have been greatly simplified if access to the common services provided by PCTE had been available. ADAS is a discrete-event simulation tool marketed by CADRE Technologies, Inc. This tool allows the user to graphically define a system model, run a discrete-event simulation of the system, and view the simulation as it progresses. ADAS also provides tables of the results that can be further analyzed.
This tool has proven to be effective for measuring the performance of proposed parallel architectures for aerospace applications (ref. 17).

**Project Implementation Using PCTE**

**PCTE Services**

Figure 5 illustrates the following major services offered by the Emeraude implementation of PCTE version 1.5:

1. The most significant aspect of PCTE is the **Object Base**, which is the common repository of all data in PCTE. The Object Base is a typed, persistent store.

2. The **Metabase** is that portion of the Object Base devoted to describing the contents of the remainder of the Object Base. In practice, the Metabase is a collection of objects that describes the data types of the objects in the Object Base.

3. The primary operations for accessing the Object Base are provided by the PCTE **Object Management System** (OMS). The OMS provides tools that can create, examine, and alter objects in the Object Base.

4. The **Execution/Communication** services include support for distribution of the Object Base and for interprocess communication.

5. The **Metabase Services** are operations that use the OMS to examine and update the Metabase. Examples include operations to create new types and to determine the type of an object.

6. **Version Management Services** are available for all objects in the base.

7. **Data Query Management Services** allow programs to formulate searches of the Object Base.

Not all the services listed above were used in this project. The Execution/Communication services were largely irrelevant because the evaluation copy of the Emeraude environment obtained for this project was limited to a single network node. The Data Query Management facilities currently lack an interactive interface in the Emeraude environment and thus appear to be relatively inaccessible. Version Management, although critical to long-term projects, would not have been fully exercised during this relatively short project.
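The division of labor among the Object Base, the Metabase, and the OMS can be pictured with a small in-memory model. This is an illustration only: all class and method names below are hypothetical and do not correspond to the real PCTE/Emeraude C interfaces, which are not reproduced in this report.

```python
# Toy analog of the PCTE Object Base, Metabase, and OMS (hypothetical names).
# The "Metabase" is the dictionary of ObjectType descriptions; the "OMS"
# operations are the ObjectBase methods that consult it before acting.

class ObjectType:
    """A Metabase entry: a type, its attributes, and its supertype."""
    def __init__(self, name, supertype=None, attributes=()):
        self.name = name
        self.supertype = supertype
        self.attributes = set(attributes)

    def all_attributes(self):
        # Inheritance: a subtype offers its own attributes plus its supertype's.
        inherited = self.supertype.all_attributes() if self.supertype else set()
        return self.attributes | inherited

class ObjectBase:
    """A toy Object Base; its methods play the role of OMS operations."""
    def __init__(self):
        self.metabase = {}   # type name -> ObjectType description
        self.objects = {}    # object name -> (ObjectType, attribute dict)

    def define_type(self, name, supertype=None, attributes=()):
        parent = self.metabase[supertype] if supertype else None
        self.metabase[name] = ObjectType(name, parent, attributes)

    def create(self, obj_name, type_name):
        self.objects[obj_name] = (self.metabase[type_name], {})

    def set_attribute(self, obj_name, attr, value):
        obj_type, attrs = self.objects[obj_name]
        # The environment rejects attributes not declared for the object's type.
        if attr not in obj_type.all_attributes():
            raise ValueError(f"{attr!r} not defined for type {obj_type.name!r}")
        attrs[attr] = value

# A fragment of the predefined hierarchy described later in the text:
# object -> file -> c_source.
oms = ObjectBase()
oms.define_type("object", attributes={"name"})
oms.define_type("file", supertype="object", attributes={"owner", "mod_date"})
oms.define_type("c_source", supertype="file", attributes={"includes"})

oms.create("main.c", "c_source")
oms.set_attribute("main.c", "name", "main.c")   # inherited from 'object'
oms.set_attribute("main.c", "owner", "dave")    # inherited from 'file'
```

The key property the sketch captures is that every OMS operation is mediated by type descriptions held in the Metabase, so the environment can refuse an attribute that a type does not declare.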
On the other hand, the most novel and pervasive new capability offered by typical open environments is an Object Base, and the PCTE Object Base and OMS were used extensively. For this application, Metabase services were employed to extend the Metabase with new data type descriptions for the data manipulated by the project tool set.

**Tool Classes**

From the viewpoint of PCTE, there are two important classes of tools, as illustrated in figure 6:

Native: Native tools are those designed and implemented with specific PCTE support. Such tools will ideally distribute inputs and outputs across many OMS objects, attributes, and relations in an effort to anticipate the information requirements of other tools that may later be added to the environment. To native tools, the OMS represents an elaborate storage system supporting interobject relations.

Foreign: Foreign tools are those designed for use in another environment such as UNIX. Such tools expect inputs and outputs to appear in simple files. Foreign tools (such as UNIX tools) can be imported into the Emeraude environment by a process of encapsulation. The encapsulated tool still receives inputs and outputs from "files," but many of these files are now objects of type file (or some subtype of file) in the Object Base. The OMS allows the environment to record information about these files as attributes and relations without examining the internal file structure; the file objects themselves are treated as black boxes by the Emeraude environment.

Emeraude provides two mechanisms for encapsulation. The first is to recompile the tool, substituting the Emeraude I/O library for the conventional UNIX I/O library. This substitution provides a set of I/O operations with signatures identical to the file-handling primitives of UNIX, but that actually open, close, read, or write objects of type file in the PCTE Object Base.
The second mechanism, which is useful for tools that receive their file names via their command line invocation, is to wrap a simple Emeraude shell script around the tool invocation. Within that script, the command line parameters are processed by a special Emeraude command that converts the logical paths to file objects into the actual UNIX path names where the Object Base has located the particular file objects. The files can then be read and/or written using the normal UNIX primitives. The tool set used for this study consisted of foreign tools; because the source code was not available for these tools, the second encapsulation method was employed in this project. However, the CSDL CASE tool, as previously mentioned, was modified by CSDL to export internal information representing the data flow object; although the encapsulation method was used with this tool, it thus had some of the characteristics of a native tool.

**The OMS Type System**

As noted earlier, all objects in the PCTE Object Base are typed. The type determines

1. The attributes that describe the object

2. The relations (links) the object may have with other objects

3. Whether the object has contents (i.e., whether it can be opened, closed, read, and written)

An attribute is a named value associated with an object; attributes can be strings or numbers. A relation is a bidirectional link between two objects, with a different name for each direction. Both attributes and relations can be viewed as "properties" of the object: when that property is itself another object, it is a relation; when the property is a simple string or integer value lacking a separately addressable identity, it is an attribute. Types are related by inheritance, which means that if Sub is a subtype of Super, then all attributes and relations of Super are also available for objects of type Sub. The Emeraude environment comes with a number of predefined types.
These types define OMS analogs of the following familiar concepts:

**file** — an object that has "contents" and can be written to and/or read from; has attributes such as owner and modification date

**object_code** — a subtype of file, intended to hold only object code

**c_source** — a subtype of file; to the usual file attributes and relations, adds links to "include" files and other C language-specific information

**object** — the "root" of all types

Figure 7 shows the inheritance relations among the predefined types. The inheritance hierarchy determines the properties offered by objects of any given type. For example, any object has a name attribute; therefore, a file has a name as well. A file has contents; therefore, so does any c_source object. On the other hand, c_source objects have attributes and relations that are specific to C source code and would not be applicable to general files. Because each object is typed, the environment and the tools running in that environment are aware of what attributes and relations are available for any given object, and the environment can prevent the use of inappropriate attributes and relations with an object. Less obviously, the type system allows control of the visibility of objects, attributes, and relations. Each user has a *working schema*: a list of the object, attribute, and relationship types available to that user. Attempts to access an object, attribute, or relation whose type is not in the working schema will fail, just as if that object, attribute, or relation did not exist. Individual users and groups can be given or denied access to sets of types, and thus to objects of those types.

**Type Schemas**

Types are grouped into *Schema Definition Sets* (SDS), and a type may appear in more than one SDS. As new tools are brought into the environment, the new kinds of input/output data employed by those tools must be described to the environment.
Such a description is accomplished by defining a new schema containing the data types needed by the new tool. As an example of this design process, consider the problem faced in this project of integrating the CSDL CASE and ADAS tools. This scenario called for software designs (data flow diagrams) from CSDL CASE to be combined with machine-characteristic information to produce a work load script to drive an ADAS simulation. This problem suggests an initial list of new types: a data flow diagram, machine characteristics, and a work load script. On closer examination, it was determined that CSDL CASE can represent data flow diagrams as directed graphs or as text scripts, suggesting two more types: `dfd_graph` and `dfd_script`. The first step in defining these types was to organize them into an inheritance hierarchy, as shown in figure 8. Objects of type `dfd_script` and `dfd_graph` are produced as files by CSDL CASE; they have contents (i.e., we must be able to read and write them), so it makes sense to treat them as subtypes of `file`. Similar arguments hold for `machine_char` and `workload_script`. The notion of a data flow diagram (`dfd`) as a possible combination of graph and script is an organizational idea (analogous, for example, to a directory); as such, a dfd has no contents of its own, so it cannot be a file. After the inheritance hierarchy has been set, relations are added among the types. Relations serve both to add information about the objects and, especially, to enforce certain constraints:

1. For every work load script, there can be only one data flow diagram and one machine-characteristics file.

2. The same data flow diagram can be used to produce many different work loads (e.g., by varying the machine characteristics and/or the number of concurrent tasks).

3. The same machine characteristics can be used to produce many different work loads (e.g., by varying the software data flow and/or the number of concurrent tasks).

4.
For any data flow diagram, there can be at most one CSDL CASE graph and at most one CSDL CASE script.

The first three constraints can be seen in the schema shown in figure 9. In this figure, the boxes denote types, and the triangles and diamonds denote relations that may link objects of the indicated types. A diamond is used when a name is assigned to each direction of the relation, and a triangle is used when a name is given to only one of the two directions. Thus, for example, from any work load object, one can follow a .script link to find the corresponding work load script, and from a work load script object one can follow a .script_for link back to its work load.

Figure 9. Work load schema definition set.

Links that end in a single arrowhead denote a many-to-one link; links ending in a double arrowhead denote a many-to-many link. Thus it is apparent from figure 9 that a given work load has a single data flow diagram (via the .flow link); however, each data flow diagram can contribute to many work loads (by way of the .flow_for link). When both directions of a relation are many to one, the combination is equivalent to a one-to-one relationship; thus, the relation between work loads and work load scripts is one to one. The fourth restriction is captured in the schema shown in figure 10, which also illustrates an early decision that the environment might contain many different tools capable of building and manipulating data flow diagrams, such as both the CSDL CASE and ADAS tools. The final step in developing schemas for describing tool interactions is to "decorate" the object types with attributes to help describe the objects and to make internal information available to other tools.
Some of the attributes we employed for work load and dfd objects were:

- **name** — an identifier inherited from the root type **object**
- **topology** — a name describing the general shape of a data flow graph
- **num.tasks** — the number of duplicate tasks, each a complete instance of the software data flow diagram
- **width, max.path.length, ...** — various attributes describing the shape and properties of the dfd graph

Note that information such as the dfd and machine characteristics used with each work load is already available, but as relations, not attributes. By following relationship links and examining the attributes of the objects encountered, a variety of searches and retrievals can be performed. Emeraude has query and searching primitives (the Data Query Management Services), but those primitives are provided as a library of C routines; no interactive tool except a basic OMS browser is currently provided. InQuisiX, which was used for this purpose, is a reuse librarian tool that describes library units in terms of attributes and permits interactive searches for units that satisfy various constraints on those attributes.

Many attributes defined for work loads were chosen to illustrate the processes of registering a work load in a reuse library and of permitting later searches and retrievals of those work loads. In such a situation, we would anticipate that some, but not all, of the useful information about the work load would be assigned by the tools that created the work load. This is true of (1) the name, (2) the links to the data flow diagram, work load script, and machine-characteristics files, and (3) the number of concurrent tasks. Other attributes, primarily those concerned with documentation, would be filled in by the reuse librarian when the object is cataloged for general use.
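Attribute-based search of the kind InQuisiX provides can be sketched as follows. This is a plain illustration, not the InQuisiX interface or Emeraude's Data Query Management Services (those are C routines); the catalog entries and attribute names are invented.

```python
# Minimal attribute-search sketch: each cataloged work load is a dict of
# attributes; a query is a set of predicates every hit must satisfy.
# All entries and values are invented for illustration.

catalog = [
    {"name": "run_a", "topology": "pipeline", "num_tasks": 4, "width": 2},
    {"name": "run_b", "topology": "pipeline", "num_tasks": 8, "width": 2},
    {"name": "run_c", "topology": "mesh",     "num_tasks": 4, "width": 4},
]

def search(catalog, **predicates):
    """Return catalog entries whose attributes satisfy every predicate."""
    return [
        entry for entry in catalog
        if all(pred(entry.get(attr)) for attr, pred in predicates.items())
    ]

# Find pipeline-shaped work loads with at least 4 concurrent tasks:
hits = search(catalog,
              topology=lambda t: t == "pipeline",
              num_tasks=lambda n: n is not None and n >= 4)
print([h["name"] for h in hits])  # -> ['run_a', 'run_b']
```

A reuse librarian would additionally follow relation links (to the dfd and machine-characteristics objects) rather than search attributes alone.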
Thus, in this case, it was necessary for PCTE attribute values to be sent to the InQuisiX library, and for any changes to object attributes made by the InQuisiX librarian to be reflected later as PCTE attributes.

![Figure 10. Set for dfd schema definition.](image)

**Summary of Experience With Emeraude PCTE Version 1.5**

The project was undertaken over a 10-week period from June 1, 1992 to August 7, 1992. This period was chosen to correspond with the summer on-site performance period of the JOint VEntures (JOVE) program funded by Marshall Space Flight Center, which sponsored one project member. The project, which was successfully completed within the allotted time, consisted of four major tasks:

1. Education (2 weeks): This task included installing and studying the capabilities of the Emeraude environment. Additionally, the target tool set was unfamiliar to the project team; therefore, part of this time was devoted to studying the capabilities of the two alpha-release tools and the commercial tool.
2. Demonstration definition (3 weeks): During this time the functions to be demonstrated (software reuse, CASE, and architecture performance evaluation) were refined, and the demonstration process was defined. These refinements involved evaluating the data integration possibilities of the target tool set. The need for a CASE tool modification was recognized, and the specific modification was defined. This modification was implemented in a very short time by the developers of the CASE tool at the Charles Stark Draper Laboratory, Inc.
3. Implementation (3 weeks): The PCTE object types, relations, and schemas were developed along with the tool encapsulation scripts and filters.
4. Evaluation (2 weeks): During the last 2 weeks, the coordinated design process, as represented in Figure 3, was demonstrated. The project was also documented during this time.
The fact that this project was completed within the 10-week period indicates the relative ease with which foreign tools could be encapsulated in the Emeraude implementation of PCTE. This ease was partly due to the orientation of the environment toward UNIX and the project team's knowledge of UNIX.

The current PCTE standard only supports large-grain data modeling at the object level. The internal structure of objects is treated as a black box by PCTE; thus, tools designed to manipulate the contents of the objects must agree on format outside the modeling capabilities of PCTE. Nevertheless, the ability to model objects at the large-grain level and to define the relationships between them was valuable. The development of an explicit object data model clarified the role of each tool in the project's engineering development process and the relationships between the tools. This model also enforced consistency and precision in the use of the information defined and manipulated by the tools.

Although PCTE does not support the encapsulation of object types with behavior, this was not limiting for this particular project. Additionally, the availability of an external ASCII format for the internal contents of the data flow diagrams and the work load scripts allowed some fine-grain data manipulation through the development of simple filter programs that translated between formats. Most of this project was developed using the facilities of the Emeraude Shell Programming Language and Makefile facility. Programs in C were written only for the fine-grain data filtering.

Because the three tools used in this project were developed by three different vendors with different window systems and approaches to user-interface management, each tool presented a distinct interface to its services.
The availability of an open environment with infrastructure support for defining a user interface would contribute greatly toward providing a consistent "look and feel" for each tool.

One major advantage of the PCTE Object Management System is support for transaction control. The user could define the beginning of an activity, manipulate a set of objects, then abort the activity and roll the system back to the original state. This facility eased the tasks of learning PCTE and correcting the software developed for the project, because the Object Base could always be returned to a consistent state. In fact, part of the demonstration was performed as a transaction activity, then rolled back to return the system to the same starting state for subsequent demonstrations.

**Research Issues**

The study also considered several areas of future research, which include object management support with programming language interfaces, development environments for concurrent systems, and process modeling.

**Object Base Technology**

The object-oriented database (OODB) is a relatively new class of data storage that has not yet matured to the same degree as more conventional data base forms. Open issues affecting both the features and performance of OODB's include type evolution, scale and granularity, inheritance of behavior, and efficiency. Many of these issues are discussed in references 18 and 19. The development of appropriate type systems for persistent object management in particular is complicated by the need to map persistent object types into types that can be processed by a variety of conventional programming languages; that is, the purpose is to achieve an interoperable type system. Most prior efforts to achieve interoperability have been directed at overcoming differences in machine and/or language representation of "equivalent" data structures. Approaches have included:

1. Imposing a single data model: An example is the widespread adoption of the IEEE standard for floating-point number representation. By encouraging all vendors to comply with this model, interoperability of this data form is achieved. Unfortunately, by its nature this approach can only be applied to a finite number of data structures.
2. Imposing a unifying data model: Data description languages such as the Interface Description Language (IDL) provide a uniform model for construction of new compound data structures from a small set of primitives (ref. 20).

The combination of single data models for primitives (for example, numbers and characters) and a unified model for construction of compound structures is an important step toward achieving interoperability. There is another level, however, at which it is often more convenient to consider the issue: the level of the data abstraction implemented by a particular representation. Representation-level schemes provide only minimal assurance that a given data structure will be manipulated in an acceptable manner by users from distinct environments. In other words, the data are transferred, but the enforcement of the abstraction captured by that data is left to the good will and capabilities of the programmers in each environment.

**Programming Language Interfaces to Persistent Data**

Conventional data base languages have long been criticized for a lack of programming power and expressiveness, as well as for being far behind the state of the art in incorporating software engineering concepts into language design. On the other hand, traditional programming languages offer no support for persistence beyond the idea of a "file." For this reason, interest has been growing in "persistent programming" languages that merge the expressiveness of modern programming languages with support for persistent object stores (ref. 21).
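By way of analogy, a modern scripting language can give ordinary objects a persistent store with very little ceremony. The sketch below uses Python's standard `shelve` module; it is not one of the persistent languages cited here, and the stored keys and values are invented for illustration.

```python
import os
import shelve
import tempfile

# Sketch: a persistent object store with near-transparent, dict-style access.
# Values assigned here survive the program run (they are written to disk).
path = os.path.join(tempfile.mkdtemp(), "store")  # invented location

with shelve.open(path) as store:
    store["dfd:sensor_pipeline"] = {"topology": "pipeline", "num_tasks": 4}

# Later (or in another process), the object is still there:
with shelve.open(path) as store:
    obj = store["dfd:sensor_pipeline"]

print(obj["num_tasks"])  # -> 4
```

The point of the analogy is the "little or no visible change" property: the code reads like ordinary dictionary manipulation, with persistence supplied by the store rather than by explicit file handling.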
Most persistent programming languages organize persistent data into special "bulk" data types such as sets or relations (refs. 22 and 23). A smaller number of these languages have attempted to merge persistence support directly into a traditional language with little or no visible change to the language (ref. 24). Curiously, this minimally intrusive approach has seldom been employed with languages that offer rich support for data abstraction. An exception is Zeil's ALEPH project, which adds persistence to Ada (ref. 25 and work done for Langley under NASA grant NAG1-439 and National Science Foundation grant CCR-8902918 at Old Dominion University Department of Computer Science). The ALEPH language preprocessor currently under development can serve as a vehicle for experimentation and distribution of these protocols by providing a simple interface to persistence and garbage collection for Ada programmers.

**Environments for Concurrent Systems**

Concurrent software design differs from sequential software design in several significant respects. A major difference between techniques involves the coordination between processes. In a concurrent system, processes must communicate, and the semantics of such interprocess communication can easily be used incorrectly in an implementation. The results are concurrency anomalies such as deadlock or corruption of shared data. The construction of concurrent system design tools requires a model of concurrent systems that lends itself to concurrent system design, and the exploitation of that model in a specification language that captures the model properties.

Early work on models of concurrent systems emphasized avoidance of system execution anomalies (for example, deadlock) and guaranteed the correctness of shared-data access. Recent work focuses more on efficient forms of concurrency that can be derived by redefining correctness properties and allowing for varied and more complex interleaving of shared-data operations.
Early work focused on enforcing correctness criteria at the process level rather than the individual operation level; recent work emphasizes individual operations. One model, the general process model (GPM), provides a framework for the consideration of both system syntax (that is, the arbitrary forms of data objects and accesses) and semantics (that is, the meaning and effect of individual data objects and manipulations). This model is the basis for the development of environments for designing and analyzing concurrent systems. (For example, see ref. 26.)

**Process Modeling and Management**

The validation and verification of mission-critical computer systems must encompass not only the artifacts produced (for example, specifications, code, and designs) but also the process used to develop those artifacts. Process modeling refers to the definition of the set of activities that comprises the development process and the interrelationship of these activities. Process management refers to the use of a process model in controlling, measuring, and certifying the dynamics of development. The importance of process modeling for the development of complex computer systems has been recognized for some time (ref. 27), but interest in this area has increased rapidly over the past few years (refs. 7 and 28). An open environment can play a significant role in the management, enforcement, and documentation of the development process. Because a process model controls the set of activities that makes up the engineering development, this process is best embedded in a unified development environment in which all access to development resources can be monitored, recorded, and controlled.

**Conclusions**

The development and operation of complex computer systems will require computer-aided support throughout the system life cycle.
The proliferation of computer-aided software engineering systems in the past decade is a testimony to the widespread need for automation technologies to support computer system development. Although the specific set of technologies, tools, and methodologies varies with application and state of practice, it is possible to identify an underlying infrastructure that provides the basic set of services necessary to support a unified system development environment. An environment such as the Portable Common Tool Environment (PCTE) provides this underlying set of services.

This study investigated the role that an open environment could play in the development of mission-critical computer systems. A conceptual design scenario for the performance evaluation of parallel computer architectures that involved three diverse software tools was proposed. The tools were integrated using the emerging PCTE standard, and the design scenario was successfully demonstrated. The study demonstrated the feasibility of integrating a set of independently developed tools into a design environment and suggests that the current state of open environment standards, as represented by PCTE version 1.5, is sufficiently mature to warrant consideration in future implementations. This study also suggested that future tool development should be undertaken within an open environment context and that existing tools should be migrated into that environment. This strategy would provide many opportunities for the integration of existing and proposed capabilities into a unified and manageable development process.

A study of computer engineering tool integration using the Portable Common Tool Environment (PCTE) Public Interface Standard is presented.
Over a 10-week time frame, three existing software products were encapsulated to work in the Emeraude environment, an implementation of the PCTE version 1.5 standard. The software products used were a computer-aided software engineering (CASE) design tool, a software reuse tool, and a computer architecture design and analysis tool. The tool set was then demonstrated to work in a coordinated design process in the Emeraude environment. This paper describes the project and the features of PCTE used, summarizes experience with the use of the Emeraude environment over the project time frame, and addresses several related areas for future research.
Transition of Governance in a Mature Open Software Source Community: Evidence from the Debian Case

Bert M. Sadowski*, Gaby Sadowski-Rasters and Geert Duysters

* Corresponding author

Abstract

As flourishing, productive open source software (OSS) communities mature, they have to introduce a variety of governance mechanisms to manage the participation of their members and to coordinate the launch of new releases. In contrast to other modes of governance of OSS communities, the Debian community introduced new mechanisms of informal administrative control based on a constitution, elected leaders and new functions attributed to interactive communication channels (like mailing lists or IRC channels) that can provide for community effects (and feedback). We show that these control mechanisms were introduced as a response to emerging innovative opportunities due to the usage of source packages and heterogeneous learning processes by different groups within the Debian community.

Keywords: Open Source Software community, Governance Mechanism, Debian community

JEL codes: O30

UNU-MERIT Working Papers ISSN 1871-9872

Maastricht Economic and social Research and training centre on Innovation and Technology, UNU-MERIT

UNU-MERIT Working Papers intend to disseminate preliminary results of research carried out at the Centre to stimulate discussion on the issues raised.

---

1 We have to thank Ray Dassen and Jeroen van Wolffenaar from Debian for their continuous support and their valuable inputs in this paper. All remaining errors, of course, are ours.
2 * Corresponding author, University of Technology Eindhoven, PO Box 513, 5600 MB Eindhoven, The Netherlands, Tel: 0031-(0)402475510, Fax: 0031-(0)402474646, email: b.m.sadowski@tm.tue.nl

3 Municipality of Eindhoven, PO Box 90150, 5600 RB Eindhoven, email: g.sadowski@eindhoven.nl

4 * Corresponding author, UNU-MERIT, Keizer Karelplein 19, 6211 TC Maastricht, The Netherlands, Tel: (31) 43 3884413, e-mail: duysters@merit.unu.edu

1 Introduction

The continuing fascination with open source software (OSS) has led to an explosion in the number of volunteers working in OSS communities. The continuous growth of these communities, in combination with the increased demands on the open software community, has, however, created mounting problems for these same communities in terms of organization and governance. The traditional ways of organizing these communities have proved unable to cope effectively with these conditions of exponential growth.

In OSS communities, the creation of new knowledge requires, on the one hand, a set of organizational rules and structures that allow critical evaluation of existing knowledge, innovation and rapid elimination of error (Kogut, 2000). On the other hand, the growing need of the open software community reduces the time available for the introduction of new releases while requiring a high quality of new releases (Michlmayr, 2004). Due to this dilemma, the organizational forms to coordinate and govern collaborative work have to be flexible and should be able to adapt easily to heterogeneous learning conditions within different groups in OSS communities.

The Debian OSS community fits this general picture, with the number of developers increasing from a sheer total of 60 in 1996 to over 9000 in 2005 and with the amount of source packages rising from 250 in 1995 to 10,869 in 2006.
During this roughly ten-year period, the growth of the Debian OSS community was accompanied by experimentation with different governance forms based on informal hierarchy after the original founder, Ian Murdock, left Debian in March 1996.

Debian is a free Operating System (OS). It uses the Linux kernel (the core of an operating system), but most of the basic OS tools come from the GNU project (GNU is a recursive acronym for "GNU's Not Unix"); hence the name GNU/Linux. Debian is very similar to OSS projects like RedHat and SuSE whose Linux strategies focus primarily on the application of Linux for enterprises (e.g. Red Hat Enterprise Server, SUSE Linux Enterprise Server, Novell Open Enterprise Server/Linux, Novell Linux Desktop).

As a response to mounting organizational challenges, Debian, like other OSS communities such as Apache or Linux, came up with new ways of organizing distributed work that differed from traditional work practices as experienced in professional organizations (Franke and von Hippel, 2003, Lee and Cole, 2003, Moon and Sproull, 2000). In contrast to other OSS communities, the Debian case shows that an OSS community can develop new governance mechanisms in the face of increasing technical and structural complexity, moving from a "great person" in charge (Moon and Sproull, 2000) to informal administrative control mechanisms based on a constitution, elected leaders and new functions attributed to interactive communication channels (like mailing lists or IRC channels) that can provide for community effects (and feedback) (Sadowski-Rasters, Duysters, & Sadowski, 2006). The Debian case shows furthermore that informal administrative control mechanisms are a way to foster heterogeneous learning processes within OSS communities.

---

5 End of February 2006.
In the following, we briefly characterize the theoretical discussion on changes in organizational structure and governance of OSS communities during their transition from the "going open" to the "growth" stage (Lameter, 2002). Afterwards, we focus on describing different governance mechanisms in the Debian OSS community after the initial founder, Ian Murdock, left the project in March 1996. Using data triangulation, the analysis draws on a variety of data sources to characterize the perspectives of different stakeholders on the governance forms within the Debian OSS community. In this piece we try to answer our main research question of how alternative governance mechanisms have revolutionized an OSS community such as Debian. Finally, we conclude with a brief discussion of our findings.

2 Mature OSS Communities and their Governance Forms

Open source software (OSS) communities are characterized by distinctive features such as: a) a shared common interest of members communicating through the Internet without face-to-face contact (Hertel et al., 2002, Rheingold, 2002); b) active pursuit of collective innovation and production processes (Hemetsberger, 2002); c) members bound together by shared as well as complementary expertise, which makes it possible to manage complex projects (Hertel et al., 2002); and d) reciprocity on the group level, as individuals who add code (or provide other contributions) to the group project receive something from the group in return (for instance other code or bug reports).

In contrast to collaborations, OSS communities are less restrictive in their access policy, relying on referral or reputation, and develop a more specific community code including sanctions for violating this code. Furthermore, they are less flexible compared to collaborations with respect to change of members in the community.
Compared to project-based teams, OSS communities are less clearly defined and less stable with respect to boundaries, functions, roles, and norms. They are more similar to "communities of practice" (Wenger & Snyder, 2000), which emerge based on "informal and self-organizing" mechanisms and "benefit from cultivation". However, to sustain these "communities of practice", they have to be managed (Wenger & Snyder, 2000).

For OSS communities, a critical growth stage is reached at the moment they move from the project initiation stage to the stage of "going open" (Rasters, 2004, Schweik and Semenov, 2003). This stage of "going open" can be seen as critical in determining whether the OSS project will grow further, reach stability or decline. The main challenge for OSS communities has always been to find an appropriate governance form for this new stage of the OSS community. In this paper we aim to shed more light on this transformation process.

Within organization theory, governance has been characterized as a toolbox for control, supervision and monitoring of economic activity. It is aimed at achieving motivation and convergence of different objectives between all members of a group (Ouchi, 1979). Organizational life cycle theorists have shown that the internal structure of an organization changes as it goes through different growth stages (introduction, growth, maturity or decline). At these stages, appropriate governance mechanisms have to be found that can deal with increasing technical and structural complexity; otherwise, organizations decline. This discussion, rooted in original contributions by Blau (1970) and Woodward (1965), has shown that organizations cope with increasing technical and structural complexity by increasing differentiation and formalization as well as by employing a larger administrative component.
In coping with growth, OSS communities have deployed a wide variety of differentiated task structures with different degrees of formalized technical as well as administrative structures. The formalization of the technical and administrative structures has been driven by the needs within the OSS community to explore and exploit knowledge, leading to a parallel code structure of the open source software project (Lee and Cole, 2003). The evolution of these task structures and formalized structures also required different forms of governance within OSS communities. OSS communities have struggled most with the increasing complexity of the software and the explosion in the number of contributors to the community. This makes coordination in OSS communities a critical issue that separates successful from unsuccessful communities. To deal with the increased need for coordination within OSS communities, Demil and Lecocq (1999) have shown that the bazaar structure, i.e. a "great babbling bazaar of different agendas and approaches" (Raymond, 2001), can serve as a new emerging mode of governance within the OSS community (Demil and Lecocq, 1999, Raymond, 2001). Even under conditions of very high uncertainty, the bazaar mode of governance ensures coordination based on reputation effects that are induced by the community phenomenon. However, in the face of the increasing technical and structural complexity of OSS communities, the bazaar mode of governance does not prove efficient enough to account for the increased need for administrative (informal as well as formal) control mechanisms, and it provides fewer incentives for effective production compared to other modes of governance. As a result, a number of mixed forms of bazaar governance have emerged, ranging from quasi-hierarchical (Linux) to more centralized (Apache) approaches (Demil and Lecocq, 1999). For an overview of these different modes of governance of OSS communities see Lynne Markus et al (2000).
As we will show below, a unique mixed approach of bazaar governance has been developed within the Debian OSS community.

## 3 Governance mechanisms in transition: The Debian OSS community

### 3.1 Characterizing the Debian OSS community

The Debian OSS community has experienced rapid growth since its establishment in 1993 by Ian Murdock, currently involving more than 900 volunteer package maintainers. However, the Debian OSS community differs from others because the programming work within the community is concentrated not on producing code, but on integrating code into a coherent system. In this respect, Debian is more in line with Red Hat, SUSE and Mandriva than with the Linux kernel, Apache and Mozilla (Bauer and Pizka, 2003, Gonzáles-Barahona et al, 2004, Narduzzo and Rossi, 2003). Two separate code structures (trees) running in parallel can therefore be identified (a stable and a more experimental version of the Debian software), but the integration of both trees has been vital. The stable version of Debian has been focused on the package system (dpkg). The experimental version served as a test bed for new features of (public) releases of Debian. This focus on the integration of code is also important for understanding the task structures emerging within Debian compared to other OSS communities. The task structure of the Debian community has been organized around a "core", which consists of the Debian project leader (DPL) and developers, as well as a "periphery" of maintainers. While the "core" is responsible for the production of new code, the periphery deals with the integration of this code for particular applications.
This structure differs from other OSS communities like Linux (Lee and Cole, 2003) as it is sometimes difficult to draw a line between core and periphery.\(^6\) Examining the specifics of the code structure used by the Debian OSS community and the evolving task structure is essential to understanding the development of the different informal governance forms within the Debian OSS community.\(^7\) (For information on the methodology used see Appendix 1)

### 3.2 Organizational Growth and the Emergence of Informal Forms of Governance

**The Project Initiation Stage**

In the project initiation stage, OSS projects commence because one or more people realize that there is a computing-related problem or challenge left unfilled, and for one or more reasons they decide to take it on (Godfrey and Tu, 2000). Here the "itching problem" described by E. Raymond comes into play: "every good work of software starts by scratching a developer's personal itch." (Raymond, 1998). At that point it is important to reach programmers who think along with this new initiative. Motivation, "the kernel," and a modular design are three important components of this stage of an OSS project (Schweik and Semenov, 2003).

Even if there is an increasing number of studies that have focused on the motivation of programmers to take part in OSS communities (Hertel et al, 2003), the motivations of the initiators to start up a new project have only recently received some attention in the literature. The second component in the initial stage is related to the importance of an initial product for others to build upon: what has been called the project core, or the kernel. The initial project kernel has to show some promise in order for other virtual members to join in. The third critical component is a good design and the concept of modularity. Modularity allows programmers to work in parallel. This modularity also enables the project leader to keep better control over the project when the work progresses (in complexity) (Rasters, 2004). These three components can also be found in the initial phase of the Debian OSS community. The Debian project was started from scratch by Ian Murdock after he became dissatisfied with the SLS (Softlanding Linux System) release. Ian Murdock wanted to "draw a few people out of the woodwork", and had put down a request for comments, suggestions and advice. He made clear that he was developing an initial product for others to build upon. In 1993, when Ian Murdock decided to start an Open Source distribution that would always be free, he found a group of like-minded people to work with him. The stated goal was to create a complete operating system that would be 'commercial grade' but not, in itself, commercial. Ian Murdock posted his intentions to the Usenet in August of 1993 and immediately found outside interest, including that of the Free Software Foundation, the creators of much of the core software of all Linux-based systems. Murdock credits this early interest as being pivotal to the acceptance of Debian into the free software world.

---

\(^6\) However, there is a spectrum between integrators and code producers rather than a clean line of separation. For instance, many Debian developers are involved in troubleshooting other projects' code, writing patches and "upstream" work. Similarly, Red Hat employs the key developer of the GNU project's C library, and Novell employs key GNOME and Mono developers as well as kernel developers specializing in particular hardware platforms.

\(^7\) It furthermore is important to know that the Debian OSS community has not been influenced by the strategies of sponsoring companies. Other OSS communities are (still) operating in other market segments (like Ubuntu in the desktop market and in the individual user segment) or specific markets (like Mandriva) and do not (yet) have an extensive support organization such as Debian (or Red Hat or Novell) already provide.
Murdock posted his announcement in order to reach out to a small group of motivated individuals who had ideas for the project. Or as Varghese puts it: "In 1993, when Ian Murdock decided to start an Open Source distribution that would always be free, he found a group of like-minded people to work with him. The question of freedom was important to Murdock (...). It started as a small, tightly-knit group of free software hackers, and gradually grew to become a large, well-organized community of developers and users." (Varghese, 2003). The foundations for the parallel code structure were already laid down during this period, leading to the (public) releases of Debian and the (rudimentary) package system called dpkg.

**The "Going Open" Stage**

In order to enter the going open stage, OSS communities face certain challenges such as achieving project and product credibility, developing adequate communication mechanisms, creating effective recruitment strategies, and developing appropriate forms of governance. To achieve project and product credibility, the project needs to obtain support from a number of enthusiastic "core developers", to show some "plausible promise" (i.e., a high development potential of the kernel in conjunction with an existing enthusiastic programmer community of high reputation), to attract interest from programmers due to its innovativeness, to have some importance while allowing a (future) large number of developers to participate, and to demonstrate that the right amount of the problem has already been solved before the project becomes "open" (Schweik and Semenov, 2003). In order to develop appropriate communication channels, different Internet-based forms of communication are exploited, ranging from "free form" discussions (e.g. mailing lists, IRC channels), to strongly structured discussions (e.g. bug tracking systems or trouble ticketing at helpdesks), to knowledge-based discussions (e.g. wiki platforms).
To create effective recruitment strategies, the initiator has to choose a platform for announcing the project that has the potential of reaching as many readers as possible. Similar challenges were facing the Debian OSS community when "going open", once Ian Murdock felt that the Debian software was ready to be shared. He made the official announcement on the Internet and encouraged others to help him improve it. On September 2nd, Murdock officially announced the Debian project. This announcement was made on the same Linux newsgroup (comp.os.linux.development) on which he had also posted his two earlier messages about Debian. In this official posting, however, he announced the name of the Debian mailing list that should be used for the project. Ian Murdock decided to follow open source licensing principles; he made the decision to follow the GNU project and adopt the General Public License (GPL). Debian GNU/Linux is a strong supporter of free software. Since many different licenses are used for software, a set of guidelines, the Debian Free Software Guidelines (DFSG), were developed to come up with a reasonable definition of what constitutes free software. Only software that complies with the DFSG is allowed in the main distribution of Debian. The developers of the Debian GNU/Linux system have also created the Debian Social Contract, of which the DFSG is part. Initially designed as a set of commitments that the developers agreed to obey, the guidelines have been adopted by the free software community as the basis of the Open Source Definition. The Debian 0.91 release gave a first glimpse of the Debian philosophy. By this time, a dozen or so people were involved in development, though Ian Murdock was still largely packaging and integrating the releases himself. After this first public release of Debian, attention turned toward developing the package system called dpkg.
A rudimentary dpkg existed in Debian 0.91, but at that time it was mostly used for manipulating packages once they were installed, rather than as a general packaging utility. By Summer 1994, early versions of dpkg were becoming usable, and other people besides Ian Murdock began joining the packaging and integration process by following guidelines that explained how to construct packages that were modular and integrated into the system without causing problems. By Fall 1994, an overloaded Ian Murdock, now coordinating the efforts of dozens of people in addition to his own development work, transferred responsibility for the package system to Ian Jackson, who proceeded to make many valuable enhancements and shaped it into the current system. After months of hard work and organization, the Debian Project finally made its first distributed release in March 1995, Debian 0.93 Release 5. Debian 0.92 had never been released, and Release 1 through Release 4 of Debian 0.93 had been development releases made throughout Fall and Winter 1994. These development releases served to experiment with and further improve on public releases, and were used as a learning device. To account for this experimental tree of development and to include new innovative opportunities, the Debian OSS community later developed a whole cycle of releases ranging from an 'unstable' over a 'testing' to a 'stable' package. Table 1 provides an overview of Debian releases and major events during this second phase.

---------------------- Insert Table 1 about here ----------------------

As can be seen in Table 1, since 1995 the steady growth in the number of packages has been accompanied by an increase in the number of developers in the Debian community. By this time, the Debian Project, as it became known, had grown to include over sixty people.
In the summer of 1995, Ian Murdock transferred responsibility for the base system, the core set of Debian packages, to Bruce Perens, giving him time to devote to the management of the growing Debian Project. Work continued throughout the Summer and Fall of 1995, and a final a.out binary format release, Debian 0.93 Release 6, was made in November 1995 before attention turned to converting the system to the ELF binary format. Ian Murdock left the Debian Project in March 1996 and Bruce Perens assumed the leadership role, guiding the Project through the release of Debian 1.1 (called "Buzz") in June 1996. During his leadership period, the Debian Social Contract was ratified by the Debian developers in 1997; it included the Debian Free Software Guidelines (DFSG) and provided the Open Source Definition for the Debian community. As the DFSG provided guidelines on what constitutes free software in the Debian context, new members had to agree with the Debian Social Contract and the DFSG in order to join the Debian OSS community. The successor of Bruce Perens, Ian Jackson, the first elected Debian project leader (DPL), had a major influence on formalizing activities within the growing Debian community, which led to the Debian Constitution, approved by a voting procedure in 1998. As shown in Figure 1, the Debian Constitution was a first attempt to define different roles (e.g. the DPL, the Technical Committee, and Developers) in a form of hierarchy within the Debian community (Garzarelli and Galoppini, 2003). The role of coordinator was assumed by the DPL. He helped to define the project's vision, lent authority to Developers and made any decision that required urgent action. The Leader also represented the Debian Project to the outside world (e.g., by attending conferences and giving talks). All Debian Developers could vote to elect the Project Leader.
Still, the developers, who are at the bottom of this hierarchy, could override any decision taken by the Project Leader or the Technical Committee. Furthermore, the Constitution did not impose any obligation on anyone to work continuously on the Debian project; in fact, a contributor could leave the project at any time or resign from his or her position or duty by a simple announcement. Between 1996 and 1999 there were three more stable releases, which were provided by Debian developers and maintainers. Within the Debian community, a task structure had developed in which certain developers (including the DPL) contributed to new releases even if they were sometimes not directly linked to a particular package, while maintainers took existing open software packages and created ready-to-install Debian packages (Robles et al, 2005). In 1999, Debian entered the phase in which the community became really concerned about the quality of maintainers joining the project. There was even a hold on accepting new maintainers. A crisis occurred when the Debian community no longer felt that it could adequately protect its boundaries, and it closed its doors to new potential members. As the acting DPL, Wichert Akkerman, observed at that time: "I have to acknowledge that Debian has reached the point where it has grown too much and cannot continue as before. At the moment we already have chaos all over with no proper leadership. Only very few people are taking care of general management tasks. Remember this is an association of more than 500 people. There is still no proper management. Guess what would have happened if it were a company..." This led to the constitution of the New Maintainer Process and the articulation of membership criteria and procedures, thereby institutionalising the openness of the Debian project. The Debian New Maintainer process is a series of required proceedings to become a Debian developer or maintainer.
It comprises a registration process for New Maintainers (NM) that is handled by the NM Committee, a body of people who control the New Maintainer process. It is composed of the Front Desk, the Application Managers, and the Developer Accounts Managers. The Front Desk officers receive new application requests and pass them to appropriate Application Managers. The Application Manager is a Debian developer who is assigned to an applicant in order to monitor their progress through the application process. Developer Accounts Managers (DAMs) manage user accounts on Debian machines and finalize the details of membership by assigning accounts to new developers. The DAMs are delegates appointed by the DPL (see Figure 1). The New Maintainer approach has been a way of keeping Debian open, but at the same time a way to manage its boundaries. It defined a new governance structure by providing a mechanism for managing membership that made it possible to evaluate whether (or not) a new member's skills, goals, and ideology were in line with those of the community (O'Mahoney and Ferraro, 2003). From 1999 onwards there were three further releases; however, there was a gap of three years between the 3.0 release in 2002 and the latest release, Sarge, in 2005.

**The Growth Stage**

As Schweik and Semenov (2003) observe, open source projects can grow at this stage based on new membership. They can remain stable, relying on the same number of participants as in the going open stage, or they might gradually decline due to a lack of interest among participants (Schweik and Semenov, 2003). The willingness of participants to continue their cooperation in a particular project is related to past progress in areas such as project and product credibility, the development of adequate communication mechanisms, the creation of effective recruitment strategies, and the development of an appropriate institutional and governance design.
As shown in Table 1, from its initiation phase to the growth phase the Debian project developed rapidly from only a few developers into a large community. During this growth the community found ways to cope with the expansion, mainly by streamlining and coordinating communication. Communication processes, which provided for reciprocity and reputation, were streamlined and coordinated by using, in particular, the various Debian mailing lists. The Debian mailing list system evolved over the years by continuously adding new topic-specific lists such as Users, Developers, Internationalization and Translations, Ports, Miscellaneous Debian, Linux Standard Base and Software in the Public Interest. These lists were coordinated by the mailing list maintainer. As one participant described it: "The language on the list is very high tech programming language, a work-do-not-chat-mentality. Many people work behind the scenes and you do not often see them at the mailinglists. However, when they are there, they speak with great authority." Within the Debian project, mailing lists fulfilled three different functions (Lanzara and Morner, 2003): first, as virtual construction sites they were used to continuously create, update, modify and repair software constructs; second, as a sort of electronic crossroads they were used to exchange information and problems as well as discuss solutions; and third, as a form of weblog they recorded the history of the Debian OSS community. The mailing lists allowed unrestricted access to discussions, enabled knowledge circulation, and have been a means of structuring communication within the Debian community. At the same time they allowed dissemination activities of the Debian project to take place quasi-automatically, because documentation of built software products or solutions can circulate throughout the web almost instantaneously.
The dissemination process has been linked to the development activity and has been embedded in the Internet-based information and communication structure. As a result of these new functions, mailing lists were considered a new mechanism of governance within the Debian OSS community (Lanzara and Morner 2003: 37). A continuous management problem for the Debian OSS community has been the slow release cycle of Debian. The Debian project often had to defend itself on this matter. The Debian community has always been proud of the fact that it will not release buggy software, and will release only when the software is stable. Within the Debian OSS community, the Debian project leaders developed their own leadership styles to deal with the problem of slow release management and with the growth of the community as a whole. As Table 2 shows, since 1993 the Debian project has been headed by a number of leaders with very different leadership styles. There have been experiments in leadership style. At the beginning, when only a few people were involved in the Debian project, strong leadership was accepted. However, other styles of leadership were used by new Debian project leaders to deal with the increasing structural complexity of the Debian community.\footnote{In discussing the leadership qualities of former project leaders (Ian Murdock, Ian Jackson), Wichert Akkerman characterized the new challenges emerging from the differentiated task structure in his leadership speech as follows: "I do not intend to be as dictating and vocal as Bruce was, but neither as silent as Ian was the last year. Both have done a good job, but things are not what they were. Debian has grown to be too big for Bruce's style of leadership, and Ian has laid a great foundation for a new period by giving us the constitution.
This also means the role of project leader is now very different: most functions have been delegated, leaving the leader to act as a kind of benevolent overseeing person who nudges the project in a good direction."} This was the point in 1996 when leadership elections were arranged by the Project Secretary. The ways in which elections were organized also evolved over time, from simple plain-text mission statements on personal election platforms to election debates on IRC channels.

----------------------- Insert Table 2 about here -----------------------

Ian Jackson led the Debian project from January 1998 until December 1998. This was the point in time when project leaders became elected. The enormous growth of the community prohibited informal ways of transferring leadership. Ian Jackson tried, together with the community, to "fit the governance structure" to the size of the community and to the feelings of freedom that lived in the community. Ian Jackson had a major influence on how the Debian project became structured with respect to writing the constitution, election methods and the description of leadership models. In 2000, a leadership debate including speeches by the opposing candidates was introduced into the election. The debate was held on Tuesday, February 15, 2000 at 1900 UTC, at irc.Debian.org on channel #Debian-debate, a chat channel where everyone could log in. The format of the debate was as follows: 24 hours before the debate, each of the candidates e-mailed an 'opening speech' to the debate organizer, Jason Gunthorpe. These were then published on the debate web page, all at the same time, to ensure fairness. The actual debate had two parts. First, a strongly moderated traditional debate: the moderator asked a candidate a question; the candidate then had a reasonable period to answer; after the answer each of the other candidates responded in turn; and the first candidate was allowed to make closing remarks on the question.
The order of the candidates was rotated for each question. The second part of the debate was more freestyle: questions submitted by the audience and developers were asked, and each candidate got a short period to respond. After the debate a log was posted, so voters could read everything at their own pace. In the leadership elections of the year 2005 a major difference from previous leadership elections emerged.

**The Call for More Team-Based Leadership Approaches**

The year 2005 has been a very interesting one in the evolution of the Debian community. Debian GNU/Linux version 3.1, codenamed "Sarge", was released after nearly three years of continuous development. Within the Debian community, criticism increasingly mounted about the slow release management cycle of the project. Within the leadership elections, the slow release management and the growth of the user community were considered "hot" items among candidates running for election, even if these issues had already been intensely discussed in previous elections. Interestingly, the candidates running for election this time presented new solutions to these critical issues. They suggested a whole new approach towards leading the Debian project. The election platforms of two running candidates, Branden Robinson and Andreas Schuldei,\(^9\) suggested forming a small formal team of Debian developers aimed at supporting the project leader. This team, nicknamed "Project Scud", was organized in the last few weeks of 2004. Branden Robinson, who in 2005 became the new DPL, proposed "a new approach to Debian Project leadership" in which he, Jeroen van Wolffelaar, Andreas Schuldei, Enrico Zini, Steve Langasek, and Bdale Garbee formed 'Project Scud',\(^{10}\) i.e. "a team of concerned Debian Developers who have resolved to take some new approaches to resolve long-standing problems within the project". According to Scud members, having a DPL team would allow them to distribute the workload, avoid burnout and problems related to the real-world unavailability of individual developers. In previous election platforms it had become obvious that candidates running for election favored specific tasks more than others, even if all of them were related to the function of a DPL. Within the DPL team it was possible to micro-delegate tasks to the most appropriate person. The Scud team identified small teams (of up to seven people) as probably the single most important unit for the Debian project to grow in a healthy way. If a team functioned well, it could solve more problems than individual developers. The team should be able to provide a smooth entry point for new developers to gain proficiency and develop skills. Furthermore, teams should be the place where developers can get to know each other quickest and best (due to the small number of people in the group). Another advantage proposed by the Scud team was that people could form a knowledge pool when cooperating on package maintenance, infrastructural or organizational tasks, and it was less likely that such a pool would get lost, compared to the knowledge and skills lost when a single developer departs. This would make Debian more resilient against unmaintained packages or head hunters.

---

\(^9\) During the 2005 elections, candidates with their own platform were M. Garrett, A. Schuldei, A. Lees, A. Towns, J. Walther and B. Robinson.

\(^{10}\) The name Scud was meant to be an internal code-name, inspired by the dog named Scud in "Toy Story". After the elections the team operated under the name "DPL team"; however, Debian members referred to it as "Scud".
As these teams could grow and divide, they were considered self-organizing and would provide for very good scalability in numerical growth.\(^{11}\) While the members of the Scud team have been enthusiastic about their new ideas, there has been some controversy within the Debian community about Project Scud, which has also been referred to as a self-appointed group of advisors to the DPL. The Scud proposal has been a source of some concern, especially regarding how it would integrate with the Debian Constitution and the existing organizational structure.\(^{12}\) The discussion on the mailing lists shows that members of the Debian community were confused by the DPL team idea. They argued that the DPL can always delegate tasks to other members of the project, and that the argument of Scud members that it is impossible for a single DPL to have time to do everything is therefore not valid.\(^{13}\) One main argument against the Scud team has been that a DPL team should not be a subset of members, but should be open to everyone. Basically, there should not be any issue that could not be discussed with everyone. Debian members felt offended by the idea of private meetings between Scud members.\(^{14}\) Further question marks have been placed by Debian community members as to whether the creation of a small team increases Debian's transparency or, even worse, diminishes the openness of the overall Debian project. There have been great concerns from members about attempts to formalize the Scud team. With the upcoming leadership elections it has been time for reflection on the working of the DPL team. Jeroen van Wolffelaar, now one of the running candidates, explains that during this whole year the Debian community was divided on the issue of the Scud team. In general, the community kept a somewhat wait-and-see attitude. To his disappointment, the DPL team did not work as expected.\(^{15}\) However, it is currently not clear whether or not the Scud team will be established as something permanent within Debian's governance structure.

---

\(^{11}\) An example of team-based work being organized in the Debian project was provided by Andreas Schuldei, who argued that the Debian project needs more frequent, regular releases since the present delays cause frustration and a decline in morale in the Debian community. To pave the way for a smoother development cycle and release process he took the initiative to organize a team-based meeting of the release team and the FTP-masters.

\(^{12}\) Some members have become more concerned about the constitutional implications of the Scud team, since the Debian Constitution does not define the DPL's function as a team. It only defines the DPL's function, that of the Project Secretary, the Technical Committee, the Delegates, and the Project's Developers. Excluding bodies that are of no relevance to the DPL's position, there are only two options: first, the members of Project Scud (other than the DPL himself) do not actually have any real power, except that the DPL will support them if any of their decisions are challenged (thus, their power will only exist de facto); second, the members of Project Scud (other than the DPL himself) will be formally appointed as delegates (thus, they will have real power, backed by the Constitution).

\(^{13}\) "Why can't the DPL simply immerse in the developer community and consult with individual developers, or all of us, depending on the challenge at hand? Why the need for a closed council, which will surely employ closed means of communication among its members? Why not consult in public so we all know how our project is actually being led?"

\(^{14}\) This issue of private meetings came up during the discussion of the Vancouver Meeting in 2005, at which a small group of FTP-masters gathered in a private face-to-face meeting.

\(^{15}\) "First, because the team had no official status and the chosen DPL did not give the team the priority it deserved. Robinson liked the idea, but was not an enthusiastic proponent of the team approach. He lacked the leadership skills to lead the team in an effective manner. There have been Scud meetings, and to a certain extent they were useful, but it was not so that Scud fulfilled DPL functions. These functions were still carried out by the project leader himself."

**Summary and Discussion**

OSS communities evolve through several different phases, i.e. introduction, growth, maturity or decline. The "going open" stage has generally been considered critical for OSS communities in deciding whether these communities will face further growth, maturity or decline. To facilitate the adaptation of OSS communities during these different growth stages, a wide variety of differentiated task structures with different degrees of formalized technical as well as administrative structures have emerged. As the evolution of different task structures has been rooted in heterogeneous processes of learning, the formalization of the technical and administrative structures has been driven by the needs within the OSS community to explore and exploit knowledge. The evolution of different governance forms therefore has to be considered in the context of these task structures as well as the technical and administrative structures.
In exploring the different stages in the development of OSS communities, the paper has linked the evolution of different informal governance forms within the Debian OSS community to the particular parallel code structure utilized and the task structure within this community. Even if separate code structures running in parallel can be identified within the Debian OSS community (i.e. a stable and a more experimental version of the Debian software), the integration of both structures has proved to be vital. The task structure of the Debian community differs from that of other OSS communities like Linux (Lee and Cole, 2003), as the boundaries between core and periphery have been more difficult to trace, even if a distinction can be made between a “core” around the Debian project leader and developers and a “periphery” of maintainers. The specifics of the code structure used by the Debian OSS community and its evolving task structure have provided an understanding of the development of the different informal governance forms within the Debian OSS community. The emergence of an elected leader in conjunction with a project leadership team provides new evidence for the need to search for novel and alternative forms of governance of OSS communities. In the face of growing structural and technical complexity, these forms provide a solution to the dilemma faced by OSS communities during the “going open” stage of their development.

**References**

Ronneburg, F. (2006) *Debian GNU/Linux Anwenderhandbuch*. Creative Commons Namensnennung.

Varghese, S. (2003) Living Up to the Linux Name. *The Age*.
### Table 1: New releases and important events in the Debian history (1993 – March 2006) <table> <thead> <tr> <th>Timeline</th> <th>Release</th> <th>Package system (dpkg)</th> <th>Packages</th> <th>Developers</th> <th>Events</th> </tr> </thead> <tbody> <tr> <td>Fall–Winter 1993</td> <td>Several internal releases</td> <td></td> <td></td> <td></td> <td>Founder Ian Murdock</td> </tr> <tr> <td>January 1994</td> <td>Public release of Debian 0.91</td> <td>Rudimentary dpkg</td> <td>Small</td> <td></td> <td>Ian Murdock still largely packaging and integrating the releases himself; a rudimentary packaging system is used for manipulating packages</td> </tr> <tr> <td>Summer 1994</td> <td>Usable early versions of dpkg</td> <td></td> <td></td> <td></td> <td>With early versions of dpkg and guidelines explaining how to construct packages, other people besides Ian Murdock join packaging and integration.</td> </tr> <tr> <td>Fall 1994</td> <td>Responsibility over dpkg (I. Jackson)</td> <td></td> <td></td> <td></td> <td>Responsibility for the package system is transferred to Ian Jackson</td> </tr> <tr> <td>1995</td> <td>First distributed release (Debian 0.93 Release 5)</td> <td></td> <td>250</td> <td>60</td> <td>It is now called The Debian Project.</td> </tr> <tr> <td>Summer 1995</td> <td>Responsibility over base system (Perens)</td> <td></td> <td></td> <td></td> <td>Ian Murdock transfers responsibility for the base system (the core set of Debian packages) to Bruce Perens; Murdock remains responsible for overall Debian management.</td> </tr> <tr> <td>March 1996</td> <td></td> <td></td> <td></td> <td></td> <td>Ian Murdock leaves the Debian Project in March 1996; Bruce Perens assumes the leadership role.</td> </tr> <tr> <td>June 1996</td> <td>1.1 (Buzz)</td> <td></td> <td>474</td> <td>90</td> <td></td> </tr> <tr> <td>End 1996</td> <td>1.2 (Rex)</td> <td></td> <td>848</td> <td>120</td> <td></td> </tr> <tr> <td>1997</td> <td>1.3 (Bo)</td> <td></td> <td>974</td> <td>200</td> <td>Debian Social Contract including Debian Free
Software Guidelines (DFSG) and Open Source Definition</td> </tr> <tr> <td>1998</td> <td>2.0 (Hamm)</td> <td></td> <td>1500</td> <td>400</td> <td>Debian Constitution ratified by vote (the constitution includes election methods and the leadership debate); first elected leader Ian Jackson</td> </tr> <tr> <td>1999</td> <td>2.1 (Slink)</td> <td></td> <td>2250</td> <td>410</td> <td>Freeze on accepting new maintainers; constitution of the New Maintainer process</td> </tr> <tr> <td>2000</td> <td>2.2 (Potato)</td> <td></td> <td>3900</td> <td>450</td> <td></td> </tr> <tr> <td>2002</td> <td>3.0 (Woody)</td> <td></td> <td>9000</td> <td>1000</td> <td></td> </tr> <tr> <td>2005</td> <td>3.1 (Sarge)</td> <td></td> <td>10869</td> <td>&gt; 9000</td> <td>Leadership elections in a new format; discussion about a Debian Project Leader (DPL) team</td> </tr> <tr> <td>No release date yet</td> <td>(Etch)</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> Source: (Lameter, 2002) and own information

Table 2: Informal hierarchical forms of governance within the Debian community (1993 – 2006) <table> <thead> <tr> <th>Phase in Debian history</th> <th>Year</th> <th>Project Leader</th> <th>Leadership Characteristics*</th> <th>Informal hierarchical forms of governance</th> </tr> </thead> <tbody> <tr> <td>Initiation and “going open” stage</td> <td>1993 – March 1996</td> <td>Ian Murdock</td> <td>“Visionary”</td> <td>Founder, open community</td> </tr> <tr> <td></td> <td>April 2001 – April 2002</td> <td>Ben Collins</td> <td>“More visibility” as a leader</td> <td>Elected</td> </tr> <tr> <td></td> <td>2003 – 2004</td> <td>Martin Michlmayr</td> <td>“Motivator and internal coordinator”</td> <td>Elected</td> </tr> <tr> <td></td> <td>2005</td> <td>Branden Robinson</td> <td>“Coordinator”</td> <td>Elected; discussion about a Debian Project Leader (DPL) team; leadership elections in a new format</td> </tr> </tbody> </table> (* quotes refer to leadership characteristics used to describe these leaders in
leadership speeches or interviews with participants). Source: based on own information.

Figure 1: The Debian Constitution. Source: (Ronneburg, 2006)

Appendix 1: Methodology

The ‘community’ phenomenon was central to our analysis of the history of the Debian OSS community. We concluded that this perspective better explains the organizational changes during the growth of the Debian community than other explanations found in the literature on OSS communities.\(^3\) We closely followed the development of other OSS communities (such as Apache, Linux or Perl) and of other OSS communities developing packaged software distributions (Red Hat, SuSE). Our aim was not only to better understand the specifics of open software programming and distribution (e.g. Kraut and Streeter (1995)) but also to characterize general (as well as specific) factors driving the growth of OSS communities. For this purpose, we extensively examined the websites of these OSS communities and subscribed to different mailing lists such as floss or the linux kernel. In order to characterize governance mechanisms during the transition of OSS communities, we examined the history of the Debian OSS community based on data triangulation. As this method involves the use of different sources of data/information (Denzin and Lincoln, 1998; Marshall and Rossman, 1999), it allows one to characterize the different perspectives of stakeholders within the Debian community, such as Debian project leaders, maintainers or developers. It also enabled us to get an understanding of the specifics of the Debian community compared to other OSS communities. To examine, in more detail, the development of the Debian OSS community, a wide variety of data sources were consulted: primarily, we used internal documents related to the content and context of different Debian projects.
We complemented the analysis with semi-structured interviews (both face-to-face and by telephone) with key individuals (DPL leaders, maintainers, developers) during the period 2002–2005. Similar to Dafermos (2001), we used semi-structured interviews as they provide more detailed information of greater value than straightforward question-and-answer sessions, especially when the research is explorative. These semi-structured interviews were also useful in engaging in a continuous conversation with the interviewees. The face-to-face interviews were taped and transcribed verbatim. As a check, the transcripts were sent to the interviewees for comments. The interviews that were undertaken by telephone were written down as accurately as possible. Again, the transcripts were sent to interviewees in order to check their accuracy. The enormous willingness of participants to contribute to this research, e.g. through interviews and e-mail interaction, has been remarkable, in particular in the Netherlands. Debian developers were very supportive and helpful and always willing to travel to participate in interviews. Even developers from other places in the world said that they would help; however, as one of them remarked: “Of course, I’m willing to contribute, but when I detect ‘cluelessness’ from the side of the researcher, I will invest my time in something else.” Furthermore, we attended several Debian conferences and were “lurking around” on the Debian mailing lists, websites, IRC channels, etc. We focused on the Debian-devel(opment) mailing list, as it is the most important (the “head”) mailing list of the project, and we analyzed a few threads of messages on it. Interviews were used to gain further insights into the Debian community.
In addition, articles on Slashdot.org, members’ biographical writings and diaries, previous interviews with key members, and descriptions of the community written by other researchers and key people were extensively utilized. After initial contacts had been established, a network of participants developed. Members of the community pointed out: “You could ask this member about that,” or, “I know someone who can help you with that.” In that way we were introduced to most interviewees and important contributors to the Debian project. Several pages on the Debian homepage also pointed out key people in the Debian project. Based on this approach, we met diverse programmers, from the inner circle to newcomers on the project, which made the range of responses quite broad. In addition, we posted an overview of this case study on one of the Debian mailing lists and asked people for comments; this also brought us in touch with members of the community. A draft of the case was sent to Debian members, who provided additional (and valuable) comments. As a result we were able to follow the Debian project in great detail with respect to its history as well as its ongoing development and activities. This methodology enabled us to characterize the growth of the Debian OSS community as a process in which not only differentiated role structures emerged that both reflected and supported its activities, but also different forms of governance were implemented.
Transition of Governance in a Mature Open Software Source Community: Evidence from the Debian Case

by Bert M. Sadowski, Gaby Sadowski-Rasters and Geert Duysters
A Code Morphing Methodology to Automate Power Analysis Countermeasures Giovanni Agosta Politecnico di Milano Piazza Leonardo da Vinci, 32 20133 Milano, Italy agosta@elet.polimi.it Alessandro Barenghi Politecnico di Milano Piazza Leonardo da Vinci, 32 20133 Milano, Italy barenghi@elet.polimi.it Gerardo Pelosi Politecnico di Milano Piazza Leonardo da Vinci, 32 20133 Milano, Italy pelosi@elet.polimi.it ABSTRACT We introduce a general framework to automate the application of countermeasures against Differential Power Attacks aimed at software implementations of cryptographic primitives. The approach enables the generation of multiple versions of the code, to prevent an attacker from recognizing the exact point in time where the observed operation is executed and how such operation is performed. The strategy increases the effort needed to retrieve the secret key by hindering the formulation of a correct hypothetical consumption to be correlated with the power measurements. The experimental evaluation shows how a DPA attack against the OpenSSL AES implementation on an industrial-grade ARM-based SoC is hindered with limited performance overhead. Categories and Subject Descriptors C.3 [Special-Purpose and Application Based Systems]: Microprocessor/microcomputer applications; C.5.3 [Computer System Implementation]: Microcomputers [portable devices] General Terms Security Keywords Power Analysis Attacks, Software Countermeasures, Dynamic Code Transformation, Polymorphic Code 1. INTRODUCTION The general trend in embedded hardware security shows a large use of cryptographic operations, and increasing attention towards tamper-resistant designs and countermeasures against side-channel attacks like power analysis and fault injection.
Indeed, it has effectively been proven that physical access to an embedded device may enable the recovery of sensitive information, which is otherwise supposed to be hidden [1,2,8], by exploiting both the implementation weaknesses of the cryptographic operations and specific features provided by the underlying hardware platform. Differential Power Analysis (DPA), introduced in [6], has proven a powerful threat that triggered a flourishing research branch with a wide range of improvements and countermeasures both in hardware and software. DPA attacks against an unprotected implementation of a cryptographic algorithm follow a common workflow: first of all, they measure the power consumption (power traces) of the targeted device for a high number of runs (i.e., considering a high number of input/output values). Subsequently, they select an intermediate operation of the algorithm employing a part of the secret key, and compute an expected consumption for every possible value of the key portion, according to a model of the triggered switching activity (e.g., the Hamming weight of the outputs). Finally, the predicted consumption values are matched against each sample of the recorded power traces to assess which key hypotheses best fit the actual measurements. In this fashion, the secret key can be recovered, one part at a time, even if the relevant information is stored within the device in a non-accessible way. The principal countermeasures against power analysis are split into two categories [8]: masking and hiding. Masking aims to invalidate the link between the predicted hypothetical power consumption values, associated with the selected intermediate operation, and the actual values processed by the device. In a masked implementation, each sensitive intermediate value is concealed by splitting it into a number of shares, which are then separately processed.
Hence, the target algorithm is modified to correctly process each share and to recombine the shares at the end of the computation. A masking scheme with only two shares is composed of the values $v_m$ and $m$, where $m$ is a randomly chosen mask and $v_m$ is a share such that the value $v$ to be protected can be derived as $v = v_m \cdot m$, with $\cdot$ denoting an invertible binary operation. To compensate for this countermeasure, more sophisticated DPA attacks, known as high-order DPAs, rely on predicting the consumption of all the operations handling the shares and try to obtain a combination of them that is independent of the masking values. This value must subsequently be correlated with an analogous combination of the measured consumption values, employing the same techniques as a common (first-order) DPA. The technical effort in carrying out a high-order DPA attack quickly grows as the order (i.e., the number of shares) increases, as do the time/space resources to be employed in recording a larger number of power traces. It is commonly accepted that a masking scheme with a large number of shares makes DPA attacks either practically infeasible or inconvenient. Typically, engineering solutions strive to introduce a moderate overhead with respect to the unprotected version of the primitive, resorting to the combination of two-share masking schemes and hiding techniques [9]. Hiding methods aim to conceal the relation between the power consumption and the operations performed by the target algorithm to compute the intermediate values. The protection strategies employed in the open literature to secure software implementations are based on execution flow randomization, via shuffling the order of some instructions (i.e., permuting the sequence of accesses to lookup tables) and inserting random delays built with dummy operations [9, 10].
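The two-share masking scheme described earlier can be sketched concretely with XOR as the invertible operation and a masked table lookup, one standard way of passing a share through a non-linear function. The 4-bit S-box of the PRESENT cipher and all names below are illustrative choices, not taken from this paper:

```python
import random

# 4-bit S-box of the PRESENT cipher, used here only as a small example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def masked_sbox(v_masked, m_in, m_out):
    # Recompute a masked table T with T[x ^ m_in] == SBOX[x] ^ m_out,
    # so neither the unmasked input nor the unmasked output ever appears.
    T = [SBOX[y ^ m_in] ^ m_out for y in range(16)]
    return T[v_masked]

v = 0x9                                   # sensitive intermediate value
m_in = random.randrange(16)               # input mask
m_out = random.randrange(16)              # output mask
v_masked = v ^ m_in                       # the only share the device handles
out_masked = masked_sbox(v_masked, m_in, m_out)
assert out_masked ^ m_out == SBOX[v]      # unmasking recovers the true output
```

A second-order attacker would have to combine the leakage of the mask and of the masked share, which is exactly the extra effort high-order DPA has to pay.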
To minimize the performance overhead, the execution must be interleaved with delays in multiple places, keeping the individual delays as short as possible. In this way, an attacker faces a cumulative and hardly predictable sum of delays between the start (respectively, the end) of the algorithm and the location in time of the observed intermediate operation [4]. These techniques only affect the time dimension of the power consumption; they do not change the power consumption characteristics of the operations performed by the target device. In spite of the limits of the aforementioned techniques, software countermeasures are well suited for general-purpose processors where no dedicated security features are built in at design time. In addition to this broad range of applicability, software-based countermeasures also represent a viable mitigation means to restore security in hardware-protected systems, where the underlying hardware protections have been compromised, without the need for an expensive part replacement. 1.1 Contributions The novel approach proposed in this work is a software countermeasure framework based on the combination of a cryptographic algorithm implementation with a polymorphic engine which dynamically and automatically transforms the binary code to be protected. Thus, we propose innovative contributions with respect to two common practices in the field: (i) static generation of the protected code; (ii) manual and often application-specific generation of the protected code. Our method moves the code generation to run-time, and enables the generation of many different versions of the protected code at the designer’s will, preventing any attacker from both recognizing the exact point in time where the observed operation is executed and understanding how such an operation is actually performed.
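The hiding tactics recalled above, a shuffled lookup order plus short random dummy delays, can be sketched for the sixteen byte-wise S-box lookups of an AES-like round. The state layout and helper names are assumptions for illustration, and `SBOX` is left as an identity placeholder rather than the real AES table:

```python
import random

SBOX = list(range(256))        # placeholder standing in for the AES S-box

def dummy_delay(max_ops=4):
    # Random-length run of discarded operations: time-dimension hiding.
    acc = 0
    for _ in range(random.randrange(max_ops + 1)):
        acc ^= random.randrange(256)
    return acc

def shuffled_subbytes(state):
    out = [0] * 16
    order = list(range(16))
    random.shuffle(order)      # fresh access order on every execution
    for i in order:            # lookups no longer align across power traces
        dummy_delay()
        out[i] = SBOX[state[i]]
    return out

state = list(range(16))
assert shuffled_subbytes(state) == [SBOX[b] for b in state]
```

Because the functional result is unchanged, the transformation only moves each lookup around in time; the switching activity of the lookup itself is untouched, which is precisely the limitation noted above.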
This methodology separates the creative work of identifying replacements for assembly code snippets from the tedious (but amenable to automation) work of applying such replacements to the entire code. This strategy largely increases the effort needed to predict the value of a sensitive intermediate result of the considered algorithm and hinders the formulation of a correct hypothetical consumption to be correlated with the power measurements. Polymorphic engines are the key component to build a special class of programs: those characterized by the ability to modify parts of their own code. In particular, a polymorphic code is composed of two parts: the polymorphic engine, which never changes, and the target code to be modified. Self-modifying code is used in several areas to provide either optimization or obfuscation: dynamic compilers, and especially fragment linking [5], tamper-resistant software, and protection against reverse engineering of executable code [7]. We adapt self-modification principles both to swap parts of the target algorithm with different, but semantically equivalent, replacements and to implement concepts such as masking and hiding. In particular, concerning the hiding countermeasure techniques, our approach provides both time-dimension and switching-activity hiding, by changing both the type of operation and the time needed to compute the same intermediate value. With respect to the state of the art, we provide: (i) a generalization, by allowing several types of countermeasures to be applied in a unifying framework; (ii) an extension, by providing variants to existing countermeasures; (iii) an increased variability of the protected code, by re-generating it as often as needed by means of dynamic code morphing.
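The paper's engine rewrites binary code; as a toy analogue of the idea, the sketch below draws a random, semantically equivalent variant of an operation on every call, so repeated executions traverse different instruction sequences while the black-box result is preserved. The variant list is a hypothetical example, not a template from the paper:

```python
import random

# Three semantically equivalent byte-XOR implementations.
XOR_VARIANTS = [
    lambda a, b: a ^ b,
    lambda a, b: (a | b) & ~(a & b) & 0xFF,     # XOR via OR/AND/NOT
    lambda a, b: ((a & ~b) | (~a & b)) & 0xFF,  # XOR from its truth table
]

def morphing_xor(a, b):
    # The "engine": pick a fresh variant at each invocation.
    return random.choice(XOR_VARIANTS)(a, b)

# Black-box behavior is preserved for every variant and every 8-bit input.
assert all(v(a, b) == (a ^ b)
           for v in XOR_VARIANTS
           for a in range(256) for b in range(256))
```

Each variant takes a different number of primitive operations, so both the instruction mix and the timing of the sensitive computation vary from run to run.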
To this end, we allow the countermeasure designer to specify, for each operation or group of operations, a set of code transformation templates that can be automatically applied to the binary code of the target cryptographic primitive. Sufficient generality is provided to allow the expression of random delays such as those proposed in [4], as well as to replicate other hiding strategies, such as those shown in [3]. The availability of a wide range of transformations in our framework allows the trade-off between performance overhead and security margin to be finely tuned at design time. 1.2 Case Study We chose as a case study platform an ARM926-based STMicroelectronics SPEAr SoC, as a representative of a large class of high-end embedded devices where commonly no hardware protection against side-channel attacks is employed. The chosen cryptographic primitive is the AES, as implemented in the widely diffused OpenSSL toolkit. The choice of this testbed was driven by the large adoption of this algorithm as a means to provide data confidentiality. The most common intermediate values employed during an attack on an AES implementation are the output of a load operation from the S-box, triggered by the SubBytes step, and the output of the post-AddRoundKey state. However, on the selected platform, the consumption model relying on the latter is by far more effective than the one relying on the S-box, due to the unpredictable power saving on the load operations caused by the use of data caches (see Section 4). Even if the number of successful DPA attacks against software implementations on complex SoCs is rather low due to the complexity of such devices, we were able to extract the full AES key from the testbed platform with a fairly low number of measurements. Once the attack had been proven feasible, we employed the proposed framework to effectively and efficiently counteract the identified vulnerability. The remainder of the paper is organized as follows.
Section 2 provides the definitions necessary to formalize the code morphing operations. Section 3 introduces our code transformation framework, and describes the proposed polymorphic engine. Section 4 presents the experimental evaluation on the target case study. Section 5 provides a brief overview of closely related works. Section 6 draws some conclusions and highlights future directions. 2. SEMANTIC EQUIVALENCE OF CODE FRAGMENTS Our approach of building a different version of a static binary code at run-time prior to executing it relies on the substitution of each static code fragment with one of its randomly-chosen variants, preserving the black-box behavior of the original static code. The notions of code fragment and semantic equivalence between code fragments are formally defined as follows. **Definition 2.1 (Code Fragment).** A sequence of instructions $I = (inst_1, \ldots, inst_s), s \geq 1$, is a code fragment when each term $inst_i \in I$ is executed exactly once, $inst_j$ executes before $inst_{j'}, \forall j,j' \text{ such that } j < j' \leq s$, and no other instruction is executed between $inst_j$ and $inst_{j+1}, 1 \leq j < s$. Intuitively, the sequence of terms composing a code fragment must not include any branch or privileged instruction such as a supervisor call, an I/O or an I/O MMU-bypass operation. **Definition 2.2 (Semantic Equivalence).** Let $I, \tilde{I}$ be two code fragments, and let $A(I), A(\tilde{I})$ be the sets of live-out variables related to $I$ and $\tilde{I}$, respectively (i.e., registers and memory locations that are read at least once from the exit of the code fragment on). A semantic-preserving relation between $I$ and $\tilde{I}$ is an equivalence relation, $\sim$, where $A(I) = A(\tilde{I})$ and the corresponding written live-out values are the same.
Given a generic code fragment, the problem of constructing a set of semantically equivalent variants is not practically interesting without a precise characterization of the goals to be achieved (e.g., it is easy to generate an infinite set of equivalent code fragments from a single-instruction loop). Therefore, in the following we formulate sufficient criteria to either map a given code fragment to a semantically-equivalent one or verify if two code fragments have the same semantics. The translation of the original static code is performed through locally scoped substitutions that are easier and more efficient to implement than global ones. Thus, the semantic equivalence of the resulting program is obtained from the composition of semantically equivalent independent code fragments, thus limiting the performance impact of the dynamic translation. **Proposition 2.1 (Code Fragment Equivalence).** Let \( I = (\text{inst}_1, \ldots, \text{inst}_s), s \geq 1, \) and \( \bar{I} = (\overline{\text{inst}}_1, \ldots, \overline{\text{inst}}_{\bar{s}}), \bar{s} \geq 1, \) be two code fragments.
The semantic equivalence \( I \sim \bar{I} \) is preserved if: (1) every register \( \text{reg}_k, k \geq 0, \) of the CPU is employed in \( I \) and \( \bar{I} \) according to the following constraints: (1.a) \( \text{reg}_k \) is not included in any register assignment operation of either \( I \) or \( \bar{I}, \) or (1.b) \( \text{reg}_k \) is used only in \( \bar{I}, \) where it is spilled to memory before any assignment and filled back as the last action, or (1.c) the collections of possible register values at the end of \( I \) and \( \bar{I} \) (computed with the same initial values) must be the same; (2) the memory assignments derived from the sequence of write-operations in \( I \) are preserved in \( \bar{I}, \) with the same values; (3) any memory assignment performed in \( \bar{I} \) but not in \( I \) writes the stack segment in memory locations outside \( A(\bar{I}), \) i.e.: in memory locations after the position of the stack pointer at the end of \( I.\) **Proof.** The first condition implies that for all CPU registers the corresponding values at the end of \( I \) and \( \bar{I} \) are exactly the same. Therefore, the condition required by Definition 2.2 that all live-out registers, \( A(I) \) and \( A(\bar{I}), \) have the same corresponding values is trivially satisfied. The second condition ensures that memory locations referred to in \( I \) and \( \bar{I} \) are also written in the same order and with the same values. The last condition allows additional memory assignments in \( \bar{I} \) to target a larger portion of the stack segment w.r.t. \( I, \) nevertheless restoring the same value of the stack pointer as in \( I \) at the end of the fragment \( \bar{I}, \) as required by the first condition. These extra memory assignments are guaranteed not to be live-out, which matches the requirement stated in Definition 2.2.
It is important to note that the sufficient conditions stated in Proposition 2.1 are locally verifiable, while Definition 2.2 employs the liveness property, which must be computed for all memory locations and registers. This requires a global analysis, unfeasible at run-time, and not always feasible even at compile-time if the whole address space is considered. By contrast, the conditions in Proposition 2.1 only rely on information which can be retrieved through a linear scan. ### 3. Transformation Framework The proposed framework takes as input a target cryptographic algorithm, and statically compiles it to produce a binary code with proper calls to a run-time library which implements the code morphing engine and is in charge of modifying the target code on the underlying architecture. Figure 1 reports a high-level description of the code transformation flow. The static compiler operates within the standard compilation process from source code to machine code. The source program is written as C code with some specific annotations. Code annotations are usually employed to encode more information for the compiler than the explicit source code, such as hints about how to organize or optimize the intermediate representations of the code. In our case, we employ a custom attribute (written as \_\_attribute\_\_((secure)) before a function or variable declaration) to specify to the compiler which functions and data structures need to be secured. In the case of functions, for each stage of the static compiler (left side of Figure 1), the transformation flow wraps their calls with invocations of the Code Morphing, Rescheduling, and Array Access Permutation run-time routines. The case of array variable declarations is managed by the third stage of the static compiler (Array Access Permutation Setup), which turns the access to every array cell into an indirect access through a further array (with the same length) containing a permutation of indexes.
The Code Morphing Setup stage also allocates in memory the data structures (tile set) containing the knowledge base needed to perform the code morphing, i.e., to substitute each code fragment with one of its semantically equivalent variants included in the tile set. At run-time, the polymorphic engine, reported on the right-hand side of Figure 1, changes the original binary code through performing three steps: Code Morphing, Rescheduling, and Array Access Permutation. The Code Morphing stage, which is the core of the proposed approach, is described in greater detail in the next section. The Rescheduling step adds a level of obfuscation with respect to the possible recognition (or classification) of the executed code by rearranging the instructions within a finite window, in such a way as to preserve the data dependencies. This operation is performed by means of a single scan of the code, from the bottom to the top. At each step, a single instruction \( \text{inst}_i \) and a window of \( k \) instructions preceding it \( \{\text{inst}_{i-1}, \ldots, \text{inst}_{i-k}\} \) are considered. The dependencies of \( \text{inst}_i \) are computed, and the earliest position \( i - h \) \( (h \leq k) \) which it can take is determined. Then, a random position \( i - l \) in the range \( [i - h, i] \) is selected, and the instructions are reordered accordingly. Finally, the next value of \( i \) is set to \( i - l, \) and the process continues until the start of the code is reached. This technique is based on the well-known code scheduling theory employed in the field of compiler optimization. The code scheduling theory provides a set of semantic-preserving schedules for a given code. Where compiler practice selects a schedule to minimize latency and/or power consumption, our goal is to randomly change a given schedule into a different (but equivalent) one to alter the shape and mutual alignment of the power traces.
The Array Access Permutation step applies a random permutation to the allocated array indexes, hiding the access patterns to the substitution table of a symmetric cipher. The access pattern hiding technique has been proposed and detailed in [9, 10]. Since the access to substitution tables is not the preferred attack point in our testbed platform, for the sake of brevity, we refer to the aforementioned works for further details. 3.1 Code Morphing Engine To apply code morphing to a wide variety of code fragments, it is necessary to represent them in a way that abstracts from the actual registers and immediate values employed (e.g., add r2, r5, r2 only differs from add r4, r11, r4 in the registers used, and both can be abstracted to a normal form add r0, r1, r0). To clarify the normalization process, we will now formally define the concepts of register normalization and constant normalization. **Definition 3.1 (Register Normalization).** Given a code fragment $\mathcal{I} = \{\text{inst}_1, \ldots, \text{inst}_s\}$, $s \geq 1$, where $\text{inst}_i$ is a data processing assembly instruction specified as $\text{op dest}, \text{src}, \text{operand}$, a register normalization is a map of the register names appearing in the fields $[\text{dest}, \text{src}, \text{operand}]$ of the instruction sequence, into register names starting from r0 for the first register in $\text{inst}_1$ on. **Definition 3.2 (Constant Normalization).** Given a code fragment $\mathcal{I} = \{\text{inst}_1, \ldots, \text{inst}_s\}$, $s \geq 1$, where $\text{inst}_i$ is a data processing assembly instruction specified as $\text{op dest}, \text{src}, \text{operand}$, a constant normalization is a map of the immediate values appearing in the fields $[\text{src}, \text{operand}]$ of each instruction, into immediate values starting from #0 on, to be interpreted as symbolic constants in the resulting instruction.
Building on the two previous definitions, the normalized code fragment can be defined as follows: **Definition 3.3 (Normalized Code Fragment).** Given a code fragment $\mathcal{I} = \{\text{inst}_1, \ldots, \text{inst}_s\}$, $s \geq 1$, a normalized code fragment $\tilde{\mathcal{I}}$ is the sequence of instructions resulting from the application of both register (Definition 3.1) and constant (Definition 3.2) normalization mappings. In principle, for each normalized code fragment $\tilde{\mathcal{I}}_i$, $i \geq 1$, a set of $m \geq 1$ semantically equivalent fragments $S_{\tilde{\mathcal{I}}_i} = \{\tilde{\mathcal{I}}_{i,0}, \ldots, \tilde{\mathcal{I}}_{i,m-1}\}$, $\tilde{\mathcal{I}}_{i,j} \sim \tilde{\mathcal{I}}_{i}$, $\forall j \in \{0, \ldots, m-1\}$, can be written through applying the sufficient conditions specified by Proposition 2.1. Note that any non-trivial $S_{\tilde{\mathcal{I}}_i}$ set must be created manually. **Example 3.1.** Consider a code fragment $\mathcal{I}$ composed of a single instruction $\text{inst} : \text{eor} \ r5, r5, r4$, which writes into r5 the bitwise exclusive-or of the values in r5 and r4 ($r5 \leftarrow r5 \oplus r4$). Its normalized form is computed as $\text{eor} \ r0, r1, r2$, while a corresponding semantically equivalent fragment is given by $\tilde{\mathcal{I}} = \{\text{bic} \ r0, r1, r2; \text{bic} \ r3, r2, r1; \text{orr} \ r0, r0, r3\}$. The use of extra registers (such as r3) requires spilling them before their use, and restoring them at the end of the instruction sequence in $\tilde{\mathcal{I}}$. It is possible to generate large $S_{\tilde{\mathcal{I}}}$ sets, which must be stored in memory and used as a knowledge base for the Code Morphing phase. Therefore, a key issue is to provide a compact representation for them. Note that, for the same $\tilde{\mathcal{I}}$, we can generate several equivalent fragments which differ only by the values assumed by some of the constants involved.
Given the normalized fragment $\tilde{\mathcal{I}} = (\text{and} \ r0, r1, \#0)$, the designer may want to replace it by applying the following transformation: $$r0 \leftarrow (r1 \land (\#0 \odot \text{const1})) \ \bar{\odot} \ (r1 \land (\#0 \odot \text{const2}))$$ where $\odot$ is $\land$ or $\lor$, $\bar{\odot}$ is its dual operator ($\lor$ for $\odot = \land$, and $\land$ for $\odot = \lor$), and $\text{const1}$ and $\text{const2}$ are additional symbolic constants such that $\text{const1} \lor \text{const2} = 0x\text{ff}\ldots\text{f}$ if $\odot = \land$ and $\text{const1} \land \text{const2} = 0x0$ if $\odot = \lor$. Two fragments $\tilde{\mathcal{I}}_0$, $\tilde{\mathcal{I}}_1$ semantically equivalent to $\tilde{\mathcal{I}}$ are: $$\tilde{\mathcal{I}}_0 = (\text{and} \ r2, r1, \#0 \land \text{const1}; \ \text{and} \ r0, r1, \#0 \land \lnot\text{const1}; \ \text{orr} \ r0, r0, r2)$$ $$\tilde{\mathcal{I}}_1 = (\text{and} \ r2, r1, \#0 \lor \text{const1}; \ \text{and} \ r0, r1, \#0 \lor \lnot\text{const1}; \ \text{and} \ r0, r0, r2)$$ where $\text{const1}$ is an additional symbolic constant that can assume any value, whereas the symbolic constants #0, #1, … derived from the constant normalization are constrained to their original immediate value, and r2 is a clobbered register. We formalize the concept of a normalized code fragment augmented with operations on symbolic constants as follows. **Definition 3.4 (Tile).** Given a normalized code fragment $\tilde{\mathcal{I}}$, a tile $t_i$ is a set of normalized fragments semantically equivalent to $\tilde{\mathcal{I}}$, distinguished only by the values of additional symbolic constants $\text{const}_j$. These constants appear as immediate operands in the instructions of the code fragments, either alone or as part of constant expressions containing arithmetic-logic operators and symbolic constants from $\tilde{\mathcal{I}}$. The expressions on symbolic constants are encoded within the bits used to encode the immediate operand fields of each instruction in the tile. As for the code fragments, it is possible to write a set $S_{\tilde{\mathcal{I}}}$, which is a collection of tiles for the normalized code fragment $\tilde{\mathcal{I}}$.
For each normalized code fragment $\tilde{\mathcal{I}}_0, \tilde{\mathcal{I}}_1, \ldots$ of the code which must be protected, the collection of the corresponding tile sets $\{S_{\tilde{\mathcal{I}}_0}, S_{\tilde{\mathcal{I}}_1}, \ldots\}$ represents the whole tile set of the morphing engine. The complete workflow of the code morphing engine, shown in Figure 2, is composed as follows. First, the input code fragment $\mathcal{I}$ is mapped to a normalized code fragment $\tilde{\mathcal{I}}$ through the Register/Constant Normalization step. Then, a tile is randomly selected from the Tile Set to replace the normalized code fragment. The Register/Constant Denormalization procedure is then applied to the registers and symbolic constants. Finally, in the Constant Values Computation step, any symbolic constant is replaced by a random value, and the constant expressions encoded in the tile are evaluated to obtain the immediate operands. 4. EXPERIMENTAL EVALUATION The experimental platform used to provide a validation of our framework is an ARM-based STMicroelectronics SPEAr Head200 development board. The SPEAr SoC is based on a 32-bit ARM926EJ-S processor running at 133 MHz, without any OS, for the sake of a more precise analysis of the results. The AES binary, based on OpenSSL 1.0.0d, with 4 T-tables, is run directly from the U-Boot bootloader. The attack exploits the outcome of the xor operation in the first ADDROUNDKEY, which is stored in a register. 4.1 Performance evaluation Both the protected and the unprotected systems have been evaluated to assess the time overhead introduced by the countermeasure. Employing a GCC 4.3 based cross-compile toolchain, we observe that the original AES algorithm is composed of 536 instructions, clustered in 20 distinct normalized instructions. We employ three 4-instruction-long tiles to protect the 64 eor code fragments in the whole AES.
Thus, instruction replacement adds on average 4 computational instructions for each substituted eor, plus a load/store pair to handle clobbering. Still, we expect a reduced overhead, as most of the AES execution time is spent in accessing memory for the T-table lookups. The timings have been gathered directly via trace length measurements on the oscilloscope. On average, a run of the unprotected AES algorithm takes 228.8 µs, while a run of the protected one runs for 245 µs, thus resulting in a performance hit of 8.2% of the AES execution time. The overhead due to the code morphing amounts to 90 ms per morphing action. Since the code morphing algorithm is intrinsically memory bound, the majority of this delay is ascribed to accesses to the off-chip memory in our platform. To cope with the additional performance hit, the number of calls to the code morpher over the encryption algorithm runs can be tuned to bring the amortized encryption time within acceptable bounds for the target system. The following section shows how the overhead is reduced to a fraction of the time of one encryption run, and evaluates the security margin provided. 4.2 Differential power analysis Measured power traces were obtained with an Agilent Infiniium DSO80204B oscilloscope and an active Agilent 1131A differential voltage probe with a 3.5 GHz analog bandwidth. The oscilloscope features 4 independent analog channels with a 2 GHz analog bandwidth, coupled with an 8-bit ADC capable of recording 40 Gsamples/s, with a noise floor of 3 mV RMS, and a minimum vertical resolution of 10 mV. The measured power traces have been acquired using a sampling frequency of 500 Msamples/s over an acquisition window of 100 ksamples. The trigger signal is provided via a GPIO pin on the board and collected via a passive probe connected via an Agilent E2697A impedance adapter.
The power measurements have been obtained by measuring the voltage drop at the ends of a 1 Ω SMD resistor inserted on the SPEAr SoC power supply line. To reduce measurement noise, each trace is the result of the average of 32 measurements with the same plaintext and code. This represents a worst-case scenario, as a real-world attacker will not be able to choose which code variant is running to get averaged measurements. An attacker might trade off measurements of different plaintexts to gain noise reduction via averaging; however, the maximum number of measurements with a single code variant is bound by a design parameter, i.e., the number of encryption runs before a call to the morphing engine is made. A first-order Differential Power Analysis against both the unprotected and protected AES implementations has been used as testbench. The employed consumption model is the Hamming weight of one byte of the output of the first ADDROUNDKEY operation. This operation is computed 32 bits at a time, since that is the word length of the ARM architecture. The analysis computes the sample estimation $r$ of Pearson’s correlation coefficient $\rho$ between each sample of the actual power consumption measurements (traces) of the device and the consumption model for each possible hypothetical value of the involved key part. Subsequently, the maximum values of $r$ obtained for each key hypothesis are sorted in decreasing order. For the attack to succeed, the confidence interval $I_r$ of the maximum value should not overlap with any of the others. The correct key hypothesis can be successfully obtained when enough traces are gathered, as the width of $I_r$ decreases when the number of measurements increases. An approximately unbiased estimator for the Pearson correlation coefficient $\rho$ is $\hat{\rho} = r \left(1 + \frac{1 - r^2}{2(n-3)}\right)$, where $n$ is the number of employed samples.
To obtain the boundaries of the interval $I_r$, the probability $\text{Prob}\left\{\hat{\rho} \in [r_l, r_u]\right\} \geq \gamma$ is evaluated for a chosen confidence level $\gamma$. The theoretical correlation coefficient for the correct key hypothesis is $\rho_c = 0.250$, since the observed operation is performed on 32 bits at a time, while the consumption model takes into account only 8 of them. By contrast, the key hypotheses differing by a single bit from the correct one (the best wrong guesses) have a theoretical correlation coefficient $\rho_w = 0.218$. The values of the estimators $\hat{\rho}_c, \hat{\rho}_w$ will converge to $\rho_c$ and $\rho_w$, respectively. Figure 3a shows that 11600 traces are necessary to distinguish the correct key hypothesis from the best wrong guess with a confidence level $\gamma = 0.8$. Thus the architecture does not provide any embedded protection against side channel attacks, despite the complexity of the SoC. Two crucial factors for the success of a power analysis are the knowledge of the implementation strategy of the attacked operation, which allows the attacker to infer its consumption model, and the perfect time alignment of each trace. The devised countermeasure operates on both factors, thus actively hindering the attack. Consequently, it is sufficient to perform a code morphing action often enough to avoid the collection of a significant number of traces by the attacker. The maximum number of measurements with the same (albeit unknown) code variant $\Delta n$ that an attacker is able to collect is thus a design-time-chosen parameter indicating the security margin of the system. Figure 3b shows the result of the attack performed while our protection methodology was in action with $\Delta n=100$. In this case, the confidence intervals for the correct key hypothesis and the best wrong guess never separate. In addition to this, the correct key has a sample correlation coefficient lower than the best wrong guess.
As a further validation of our approach, we correlated the key hypotheses evaluated through the same consumption model with random values, obtaining a peak sample correlation value ($r > 0.08$) higher than both the previous estimates for $\rho_w$ and $\rho_c$. Thus, collecting a greater number of traces will not be useful due to the negligible values of the obtained sample correlation estimates. A practical measure of the security margin is given by the overlap percentage of the confidence intervals of the correct key and the best wrong guess. We note that this measure is a conservative gauge of the actual security margin, as the attacker fails to retrieve the key even when the confidence interval of the best wrong guess is both disjoint from and higher than the one of the correct key. In this case, the overlap percentage is zero but the attack does not succeed. However, the opposite scenario never happens, as an overlap of the confidence intervals unquestionably indicates the indistinguishability of the corresponding key hypotheses. The security margin obtained via code morphing for the target platform can be traded off to achieve a lower computational overhead per encryption run. The computational overhead for the encryption is composed of a fixed cost determined by the tile substitution action and an amortized cost over $\Delta n$ runs due to the call to the morphing engine. We verified that the security margin of this platform is not significantly reduced against an attack employing up to half of the traces needed to retrieve the correct key on an unprotected implementation. The optimal trade-off points for this platform are represented by running the code morpher once every 1000 runs.
Table 1 reports the trends of the security margin (i.e., the confidence intervals overlap) and the average execution time of single AES as a function of the value of $\Delta n$. Table 1: Impact of the number of runs among morphing actions on both the security margin of the system (i.e., overlap of the correct key and best wrong guess confidence intervals) and execution time of a single protected AES encryption (optimal tradeoffs in grey). Execution time of a plain AES is 228.8 $\mu$s <table> <thead> <tr> <th>Code Morphing Interval [no. of runs]</th> <th>Confidence Intervals Overlap [%]</th> <th>Average Execution Time</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>79.55</td> <td>$\times 5.00$</td> </tr> <tr> <td>200</td> <td>79.32</td> <td>$\times 3.04$</td> </tr> <tr> <td>400</td> <td>79.03</td> <td>$\times 2.05$</td> </tr> <tr> <td>600</td> <td>78.89</td> <td>$\times 1.73$</td> </tr> <tr> <td>800</td> <td>78.89</td> <td>$\times 1.56$</td> </tr> <tr> <td>1000</td> <td>78.98</td> <td>$\times 1.46$</td> </tr> <tr> <td>2000</td> <td>79.76</td> <td>$\times 1.27$</td> </tr> <tr> <td>3000</td> <td>79.04</td> <td>$\times 1.20$</td> </tr> <tr> <td>4000</td> <td>75.16</td> <td>$\times 1.17$</td> </tr> <tr> <td>5000</td> <td>67.64</td> <td>$\times 1.15$</td> </tr> <tr> <td>6000</td> <td>56.94</td> <td>$\times 1.14$</td> </tr> <tr> <td>11600</td> <td>0.00</td> <td>$\times 1.10$</td> </tr> </tbody> </table> 5. RELATED WORK The distinguishing feature of our solution lies in the code substitution technique, but schemes such as those proposed in [4, 8–10] can be implemented within our framework by means of specific tiles. In these works, the results regarding the described implementations are mostly related to microcontroller platforms and exhibit case-study specific execution times ranging from two to more than fifty times the baseline. 
When considering attacks based on an a-posteriori model [8], such as template attacks, the very high number of code variants provided by our countermeasure would require a prohibitive number of traces to obtain a reliable model, since a significant quantity of information should be gathered for all of them. In [3] the authors propose a framework that employs an information theoretic metric to identify the most sensitive instructions of a software implementation of AES on an 8-bit microcontroller and apply a static local code modification implementing random precharging. W.r.t. [3], we propose an automated, dynamic code morphing approach, which can produce a much larger number of different semantically equivalent code versions, and is also able to apply hiding and masking techniques together in a unified framework. 6. CONCLUDING REMARKS We presented a framework to automate the application of DPA software countermeasures at run-time and described a code morphing toolchain that proved to be efficient while ensuring protection for any cryptographic primitive. The proposed approach can be applied either to the whole algorithm or to the subset of vulnerable instructions to enhance performance. To our knowledge this is the first work providing this level of protection while being practically viable. The analyzed case study showed how to counter DPA attacks with an acceptable performance overhead. The overhead may be further reduced when protecting a multi-core platform by concurrently executing the encryption routine and the morphing action. Future work will target microcontrollers where the code is in a read-only memory, by morphing the execution flow with a series of random jumps along a database of code fragments. 7. REFERENCES
{"Source-Url": "http://home.deib.polimi.it/barenghi/lib/exe/fetch.php?media=dac2012.pdf", "len_cl100k_base": 8669, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22102, "total-output-tokens": 9820, "length": "2e13", "weborganizer": {"__label__adult": 0.0008382797241210938, "__label__art_design": 0.0006451606750488281, "__label__crime_law": 0.0019817352294921875, "__label__education_jobs": 0.0003838539123535156, "__label__entertainment": 0.0001150369644165039, "__label__fashion_beauty": 0.00030159950256347656, "__label__finance_business": 0.0004010200500488281, "__label__food_dining": 0.00054931640625, "__label__games": 0.0013456344604492188, "__label__hardware": 0.0186004638671875, "__label__health": 0.0010318756103515625, "__label__history": 0.0004248619079589844, "__label__home_hobbies": 0.00026035308837890625, "__label__industrial": 0.0017652511596679688, "__label__literature": 0.00025463104248046875, "__label__politics": 0.0005860328674316406, "__label__religion": 0.000888824462890625, "__label__science_tech": 0.1878662109375, "__label__social_life": 8.958578109741211e-05, "__label__software": 0.0099334716796875, "__label__software_dev": 0.76953125, "__label__sports_fitness": 0.0005660057067871094, "__label__transportation": 0.0012655258178710938, "__label__travel": 0.00028824806213378906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41406, 0.0428]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41406, 0.36644]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41406, 0.88598]], "google_gemma-3-12b-it_contains_pii": [[0, 5516, false], [5516, 13231, null], [13231, 20265, null], [20265, 27084, null], [27084, 32521, null], [32521, 41406, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5516, true], [5516, 13231, null], [13231, 20265, null], [20265, 27084, null], [27084, 
32521, null], [32521, 41406, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41406, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41406, null]], "pdf_page_numbers": [[0, 5516, 1], [5516, 13231, 2], [13231, 20265, 3], [20265, 27084, 4], [27084, 32521, 5], [32521, 41406, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41406, 0.12613]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
7e070558a1a0fac5949c9be7b8e67476f0ebca76
A Study and Toolkit for Asynchronous Programming in C# Semih Okur¹, David L. Hartveld², Danny Dig³, Arie van Deursen² ¹University of Illinois ²Delft University of Technology ³Oregon State University okur2@illinois.edu d.l.hartveld@student.tudelft.nl digd@eecs.oregonstate.edu arie.vandeursen@tudelft.nl Published in ICSE 2014, Proceedings of the 36th International Conference on Software Engineering (accepted author manuscript). DOI: 10.1145/2568225.2568309. Also available as report TUD-SERG-2013-016. ABSTRACT Asynchronous programming is in demand today, because responsiveness is increasingly important on all modern devices. Yet, we know little about how developers use asynchronous programming in practice. Without such knowledge, developers, researchers, language and library designers, and tool vendors can make wrong assumptions. We present the first study that analyzes the usage of asynchronous programming in a large experiment. We analyzed 1378 open source Windows Phone (WP) apps, comprising 12M SLOC, produced by 3376 developers. Using this data, we answer 2 research questions about use and misuse of asynchronous constructs. Inspired by these findings, we developed (i) Asyncifier, an automated refactoring tool that converts callback-based asynchronous code to the new async/await; (ii) Corrector, a tool that finds and corrects common misuses of async/await. Our empirical evaluation shows that these tools are (i) applicable and (ii) efficient. Developers accepted 313 patches generated by our tools. 1.
INTRODUCTION User interfaces are usually designed around the use of a single user interface (UI) event thread [16, 17, 24, 25]: every operation that modifies UI state is executed as an event on that thread. The UI “freezes” when it cannot respond to input, or when it cannot be redrawn. It is recommended that long-running CPU-bound or blocking I/O operations execute asynchronously so that the application (app) continues to respond to UI events. Asynchronous programming is in demand today because responsiveness is increasingly important on all modern devices: desktop, mobile, or web apps. Therefore, major programming languages have APIs that support non-blocking, asynchronous operations (e.g., to access the web, or for file operations). While these APIs make asynchronous programming possible, they do not make it easy. Asynchronous APIs rely on callbacks. However, callbacks invert the control flow, are awkward, and obfuscate the intent of the original synchronous code [38]. Recently, major languages (F# [38], C# and Visual Basic [8] and Scala [7]) introduced async constructs that resemble the straightforward coding style of traditional synchronous code. Thus, they recognize asynchronous programming as a first-class citizen. Yet, we know little about how developers use asynchronous programming and specifically the new async constructs in practice. Without such knowledge, other developers cannot educate themselves about the state of the practice, language and library designers are unaware of any misuse, researchers make wrong assumptions, and tool vendors do not provide the tools that developers really need. This knowledge is also important as a guide to designers of other major languages (e.g., Java) planning to support similar constructs. Hence, asynchronous programming deserves first-class citizenship in empirical research and tool support, too. 
We present the first study that analyzes the usage of asynchronous libraries and the new language constructs, async/await, in a large experiment. We analyzed 1378 open source Windows Phone (WP) apps, comprising 12M SLOC, produced by 3376 developers. While all our empirical analysis and tools directly apply to any platform app written in C# (e.g., desktop, console, web, tablet), in this paper we focus on the Windows Phone platform. We focus on WP apps because we expect to find many exemplars of asynchronous programming, given that responsiveness is critical. Mobile apps can easily be unresponsive because mobile devices have limited resources and high latency (e.g., excessive network accesses). With the immediacy of touch-based UIs, even small hiccups in responsiveness are more obvious and jarring than when using a mouse or keyboard. Such sluggishness might motivate the user to uninstall the app, and possibly submit negative comments in the app store [37]. Moreover, mobile apps are becoming increasingly more important. According to Gartner, by 2016 more than 300 billion apps will be downloaded annually [18].

The goal of this paper is twofold. First, we obtain a deep understanding of the problems around asynchronous programming. Second, we present a toolkit (2 tools) to address exactly these problems. To this end, we investigate 1378 WP apps through tools and by hand, focusing on the following research questions:

**RQ1:** How do developers use asynchronous programming?

**RQ2:** To what extent do developers misuse async/await?

We found that developers heavily use callback-based asynchronous idioms. However, Microsoft no longer officially recommends these asynchronous idioms [30] and has started to replace them with new idioms in new libraries (e.g., WinRT). Developers need to refactor callback-based idioms to new idioms that can take advantage of the async/await language constructs. The changes that the refactoring requires are non-trivial, though.
For instance, developers have to inspect deep call graphs. Furthermore, they need to be extra careful to preserve the exception-handling behavior. Thus, we implemented the refactoring as an automated tool, Asyncifier.

We also found that nearly half of WP8 apps have started to use the 9-month-old async/await keywords. However, developers misuse async/await in various ways. We define misuse as anti-patterns, which hurt performance and might cause serious problems like deadlocks. For instance, we found that 14% of methods that use (the expensive) async/await do this unnecessarily. 10% of methods do not follow an important good practice [22]. 1 out of 5 apps miss opportunities in async methods to increase asynchronicity, and developers (almost) always unnecessarily capture context, hurting performance. Thus, we implemented a transformation tool, Corrector, that finds and corrects misuses of async/await.

This paper makes the following contributions:

Empirical Study: To the best of our knowledge, this is the first large-scale empirical study to answer questions about asynchronous programming and the new language constructs, async/await, that will soon be available in other major programming languages. We present implications of our findings from the perspective of four main audiences: developers, language and library designers, researchers, and tool vendors.

Toolkit: We implemented the analysis and transformation algorithms to address the challenges (Asyncifier and Corrector).

Evaluation: We evaluated our tools by using the code corpus and applied the tools hundreds of times. We show that our tools are highly applicable and efficient. Developers find our transformations useful. Using Asyncifier, we applied and reported refactorings in 10 apps. 9 replied and accepted each one of our 28 refactorings. Using Corrector, we found and reported misuses in 19 apps. 18 replied and accepted each of our 285 patches.
Outreach: Because developers learn new language constructs through both positive and negative examples, we designed a website, http://LearnAsync.NET/, to show hundreds of such usages of asynchronous idioms and async/await keywords.

2. BACKGROUND

When a button click event handler executes a synchronous long-running CPU-bound or blocking I/O operation, the user interface will freeze because the UI event thread cannot respond to events. Code listing 1 shows an example of such an event handler, method Button_Click. It uses the GetFromUrl method to download the contents of a URL and place it in a text box. Because GetFromUrl is waiting for the network operation to complete, the UI event thread is blocked, and the UI is unresponsive.

Keeping UIs responsive thus means keeping the UI event thread free of those long-running or blocking operations. If these operations are executed asynchronously in the background, the foreground UI event thread does not have to busy-wait for completion of the operations. That frees the UI event thread to respond to user input, or redraw the UI: the user will experience the UI as responsive.

CPU-bound operations can be executed asynchronously by (i) explicitly creating threads, or (ii) reusing a thread from the thread pool. I/O operations are more complicated to offload asynchronously. The naive approach would be to just start another thread to run the synchronous operation asynchronously, using the same mechanics as used for CPU-bound code. However, that would still block the new thread, which consumes significant resources, hurting scalability. The solution is to use asynchronous APIs provided by the platform.

The .NET framework mainly provides two models for asynchronous programming: (1) the Asynchronous Programming Model (APM), which uses callbacks, and (2) the Task-based Asynchronous Pattern (TAP), which uses Tasks, similar to the concept of futures found in many other languages such as Java, Scala or Python.
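Code listing 1 itself is not reproduced in this version of the text. A minimal sketch of the kind of blocking handler it describes might look as follows (the `url` and `textBox` fields are assumed from the surrounding description):

```csharp
// Sketch of what Code listing 1 describes: a synchronous handler.
// Button_Click runs on the UI event thread; GetFromUrl blocks it on network I/O.
void Button_Click(object sender, RoutedEventArgs e)
{
    string contents = GetFromUrl(url);   // UI freezes until the download finishes
    textBox.Text = contents;
}

string GetFromUrl(string url)
{
    var request = WebRequest.Create(url);
    using (var response = request.GetResponse())             // blocking network call
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();                           // blocking read
    }
}
```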
2.1 Asynchronous Programming Model

APM, the Asynchronous Programming Model, was part of the first version of the .NET framework, and has been in existence for 10 years. APM asynchronous operations are started with a Begin method invocation. The result is obtained with an End method invocation. In Code listing 2, BeginGetResponse is such a Begin method, and EndGetResponse is an End method. BeginGetResponse is used to initiate an asynchronous HTTP GET request. The .NET framework starts the I/O operation in the background (in this case, sending the request to the remote web server). Control is returned to the calling method, which can then continue to do something else. When the server responds, the .NET framework will “call back” to the application to notify that the response is ready. EndGetResponse is then used in the callback code to retrieve the actual result of the operation. See Figure 1 for an illustration of this flow of events.

The APM Begin method has two pattern-related parameters. The first parameter is the callback delegate (which is a managed, type-safe equivalent of a function pointer). It can be defined as either a method reference, or a lambda expression. The second parameter allows the developer to pass any single object reference to the callback, and is called state. The .NET framework will execute the callback delegate on the thread pool once the asynchronous background operation completes. The EndGetResponse method is then used in the callback to obtain the result of the operation, the actual WebResponse. Note a subtle difference between the synchronous, sequential example in Code listing 1 and the asynchronous, APM-based example in Code listing 2.

2.2 Task-based Asynchronous Pattern

TAP, the Task-based Asynchronous Pattern, provides a slightly different approach. TAP methods have the same base operation name as APM methods, without ‘Begin’ or ‘End’ prefixes, and instead have an ‘Async’ suffix.
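Code listing 2 is likewise not reproduced in this excerpt. A rough sketch of the APM shape it describes is given below; the `textBox` and `Dispatcher` names are assumed from the surrounding text:

```csharp
// APM sketch: Begin/End pair, a callback delegate, and a 'state' object.
void GetFromUrl(string url)
{
    var request = WebRequest.Create(url);
    request.BeginGetResponse(Callback, request);   // state argument = request
    // control returns immediately; the UI event thread stays free
}

void Callback(IAsyncResult asyncResult)
{
    // runs on a thread-pool thread once the response is ready
    var request = (WebRequest)asyncResult.AsyncState;    // cast the 'async state'
    var response = request.EndGetResponse(asyncResult);  // obtain the actual result
    string contents;
    using (var reader = new StreamReader(response.GetResponseStream()))
        contents = reader.ReadToEnd();
    // UI updates must be marshalled back to the UI event thread explicitly
    Dispatcher.BeginInvoke(() => textBox.Text = contents);
}

// TAP counterpart: same base name, 'Async' suffix, e.g.
//   Task<WebResponse> pending = request.GetResponseAsync();
```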
The API consists of methods that start the background operation and return a Task object. The Task represents the operation in progress, and its future result. The Task can be (1) queried for the status of the operation, (2) synchronized upon to wait for the result of the operation, or (3) set up with a continuation that resumes in the background when the task completes (similar to the callbacks in the APM model).

2.3 Drawbacks of APM and plain TAP

Using APM and plain TAP directly has two main drawbacks. First, the code that must be executed after the asynchronous operation is finished must be passed explicitly to the Begin method invocation. For APM, even more scaffolding is required: the End method must be called, and that usually requires the explicit passing and casting of an ‘async state’ object instance - see Code listing 2, lines 9-10. Second, even though the Begin method might be called from the UI event thread, the callback code is executed on a thread pool thread. To update the UI after completion of the asynchronous operation from the thread pool thread, an event must be sent to the UI event thread explicitly - see Code listing 2, lines 13-15.

2.4 Pause’n’play with async & await

To solve these problems, the async and await keywords have been introduced. When a method has the async keyword modifier in its signature, the await keyword can be used to define pausing points. When a Task is awaited in an await expression, the current method is paused and control is returned to the caller. When the await’d Task’s background operation is completed, the method is resumed from right after the await expression. Code listing 3 shows the TAP- & async/await-based equivalent of Code listing 2, and Figure 2 illustrates its flow of execution. The code following the await expression can be considered a continuation of the method, exactly like the callback that needs to be supplied explicitly when using APM or plain TAP.
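Code listing 3 is also absent from this excerpt. From the description in Sections 2.3 and 2.4, the TAP & async/await version is approximately the following sketch (`url` and `textBox` are assumed fields):

```csharp
// async/await sketch (what Code listing 3 describes): reads like the
// sequential code of Code listing 1, but never blocks the UI event thread.
// async void is acceptable here only because this is a top-level event handler.
async void Button_Click(object sender, RoutedEventArgs e)
{
    string contents = await GetFromUrlAsync(url);  // method pauses; UI stays responsive
    textBox.Text = contents;                       // resumes on the captured UI context
}

async Task<string> GetFromUrlAsync(string url)
{
    var request = WebRequest.Create(url);
    // no statement below needs the UI, so skip capturing the context
    var response = await request.GetResponseAsync().ConfigureAwait(false);
    using (var reader = new StreamReader(response.GetResponseStream()))
        return await reader.ReadToEndAsync().ConfigureAwait(false);
}
```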
Methods that have the async modifier will thus run synchronously up to the first await expression (and if it does not have any, it will complete synchronously). Merely adding the async modifier does not magically make a method execute asynchronously in the background.

2.5 Where is the code executing?

There is one important difference between async/await continuations, and APM or plain TAP callback continuations: APM and plain TAP always execute the callback on a thread pool thread. The programmer needs to explicitly schedule a UI event to interface with the UI, as shown in Code listing 2 and Figure 1. In async/await continuations, the await keyword, by default, captures information about the thread in which it is executed. This captured context is used to schedule execution of the rest of the method in the same context as when the asynchronous operation was called. For example, if the await keyword is encountered on the UI event thread, it will capture that fact. Once the background operation is completed, the continuation of the rest of the method is scheduled back onto the UI event thread. This behavior allows the developer to write asynchronous code in a sequential manner. See Code listing 3 for an example. Comparing the code examples in listings 1 and 3 will show that the responsive version based on TAP & async/await only slightly differs from the sequential version. It is readable in a similar fashion, and even the UI update (setting the contents of the text box) is back at its original place.

By default, await expressions capture the current context. However, it is not always needed to make the expensive context switch back to the original context. To forestall a context switch, an awaited Task can be set to ignore capturing the current context by using ConfigureAwait(false). In Code listing 3, in GetFromUrlAsync, none of the statements following the await expressions require access to the UI. Hence, the awaited Task is set with ConfigureAwait(false). In Button_Click, the statement following await GetFromUrlAsync(url) does need to update the UI. So that await expression should capture the original context, and the task should not be set up with ConfigureAwait(false).

3. RESEARCH QUESTIONS

We are interested in assessing the usage of state-of-the-art asynchronous programming in real-world WP apps.

### 3.1 Methodology

**Corpus of Data:** We chose Microsoft’s CodePlex [11] and GitHub [19] as sources of the code corpus of WP apps. According to a recent study [27], most C# apps reside in these two repositories. We developed WPCollector to create our code corpus. It is available online [10] for reuse by other researchers. We used WPCollector to download all recently updated WP apps which have a WP-related signature in their project files. It ignores (1) apps without commits since 2012, and (2) apps with less than 500 non-comment, non-blank lines of code (SLOC). The latter “toy apps” are not representative of production code. WPCollector makes as many projects compilable as possible (e.g., by resolving and installing dependencies), because the Roslyn APIs that we rely on (see Analysis Infrastructure) require compilable source code.

WPCollector successfully downloaded and prepared 1378 apps, comprising 12M SLOC, produced by 3376 developers, all of which we used in our analysis, without sampling. WP7, released in October 2010, is targeted by 1115 apps. WP8, released in October 2012, is targeted by 349 apps. 86 apps target both platforms, because WP8 apps cannot run on WP7 devices.

**Analysis Infrastructure:** We developed AsyncAnalyzer to perform the static analysis of asynchronous programming construct usage. We used Microsoft’s recently released Roslyn [31] SDK, which provides an API for syntactic and semantic program analysis, AST transformations, and editor services in Visual Studio. Because the publicly available version of Roslyn is incomplete and does not support the `async/await` keywords yet, we used an internal build obtained from Microsoft. We executed AsyncAnalyzer over each app in our corpus. For each of these apps, it inspects the version from the main development branch as of August 1st, 2013. We developed a specific analysis to answer each research question.

### 3.2 How do developers use asynchronous programming?

**Asynchronous APIs:** We detected all APM and TAP methods that are used in our code corpus, as shown in Table 1. Because in WP7 apps TAP methods are only accessible via additional libraries, Table 1 tabulates the usage statistics for WP7 and WP8 apps separately. The data shows that APM is more popular than TAP for both WP7 and WP8. We also manually inspected all APM and TAP methods used and categorized them based on the type of I/O operations: network (1012), file system (310), database (145), user interaction (102) and other I/O (e.g., speech recognition) (68). We found that asynchronous operations are most commonly used for network operations.

There are two ways to offload CPU-bound operations to another thread: by creating a new thread, or by reusing threads from the thread pool. Based on C# books and references [1], we distinguish three different approaches developers use to access the thread pool: (1) the BackgroundWorker class, (2) accessing the ThreadPool directly, and (3) creating `Task`s. Table 1 tabulates the usage statistics of all these approaches. Because `Task` is only available in WP7 apps by using additional libraries, the table shows separate statistics for WP7 and WP8 apps. The data shows that `Task` is used significantly more in WP8 apps, most likely because of its availability in the core platform.

### Table 1: Usage of asynchronous idioms. The three columns per platform show the total number of idiom instances, the total number of apps with instances of the idiom, and the percentage of apps with instances of the idiom.

<table> <thead> <tr> <th></th> <th>WP7</th> <th>WP8</th> </tr> </thead> <tbody> <tr> <td>I/O APM</td> <td>1028</td> <td>242</td> </tr> <tr> <td>I/O TAP</td> <td>123</td> <td>23</td> </tr> <tr> <td>New Thread</td> <td>183</td> <td>92</td> </tr> <tr> <td>BG Worker</td> <td>149</td> <td>73</td> </tr> <tr> <td>ThreadPool</td> <td>386</td> <td>103</td> </tr> <tr> <td>New Task</td> <td>51</td> <td>11</td> </tr> </tbody> </table>

**Language Constructs:** `async/await` became accessible for WP development in the last quarter of 2012. While they are available by default in WP8, WP7 apps have to reference the `Microsoft.Bcl.Async` library to use them. We found that 45% (157) of WP8 apps use the `async/await` keywords.

Callback-based APM is the most widely used idiom. While nearly half of all WP8 apps have started to use `async/await`, only 10 WP7 apps use them.

3.3 Do developers misuse async/await?

Because `async/await` are relatively new language constructs, we have also investigated how developers misuse these constructs. We define misuse as anti-patterns which hurt performance and might cause serious problems like deadlocks. We detected the following typical misuse idioms.

3.3.1 Fire & Forget methods

799 of 2382 `async/await` methods are “fire & forget” methods, which return `void`. Unless a method is only called as a UI event handler, this must be avoided. Otherwise, it is a code smell, because it complicates control flow and makes error detection & correction difficult. Exceptions in fire & forget methods cannot be caught in the calling method, causing termination of the app. Instead, these methods should return `Task`, which does not force the method to return anything, but enables easier error-handling, composability, and testability.
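The difference can be made concrete with a small sketch (the method names and bodies below are illustrative, not taken from the studied apps):

```csharp
// async void: "fire & forget" - the caller cannot await it, and an
// exception thrown inside it cannot be caught by the caller.
async void SaveFireAndForget()
{
    await Task.Delay(10);
    throw new InvalidOperationException();  // escapes to the context; may kill the app
}

// async Task: same body, but now awaitable, composable and testable.
async Task SaveAsync()
{
    await Task.Delay(10);
    throw new InvalidOperationException();  // flows into the caller's catch
}

async Task Caller()
{
    try { SaveFireAndForget(); }
    catch (InvalidOperationException) { /* never reached */ }

    try { await SaveAsync(); }
    catch (InvalidOperationException) { /* handled here */ }
}
```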
However, we found that only 339 out of these 799 `async` void methods are event handlers. It means that 19% of all `async` methods (460 out of 2383) are not following this important practice [22].

One in five `async` methods violate the principle that an `async` method should be awaitable unless it is the top-level event handler.

3.3.2 Unnecessary `async/await` methods

Consider the example from “Cimbalino Windows Phone Toolkit” [3]:

```csharp
public async Task<Stream> OpenFileForReadAsync(...)
{
    return await Storage.OpenStreamForReadAsync(path);
}
```

The `OpenStream` method is a TAP call, which is awaited in the `OpenFile` method. However, there is no need to await it. Because there is no statement after the `await` expression except for the return, the method is paused without reason: the `Task` that is returned by `Storage.OpenStream` can be immediately returned to the caller. The snippet below behaves exactly the same as the one above:

```csharp
public Task<Stream> OpenFileForReadAsync(...)
{
    return Storage.OpenStreamForReadAsync(path);
}
```

It is important to detect this kind of misuse. Adding the `async` modifier comes at a price: the compiler generates some code in every `async` method. We discovered that in 26% of the 167 apps, 324 out of all 2383 methods unnecessarily use `async/await`.

There is no need to use `async/await` in 14% of all `async` methods.

3.3.3 Long-running operations in `async` methods

We also noticed that developers use some potentially long-running operations in `async` methods even though there are corresponding asynchronous versions of these methods in .NET or third-party libraries.
Consider the following example from indulged-flickr [15], a Flickr client app:

```csharp
var response = await DispatchRequest(...);
using (StreamReader reader = new StreamReader(...))
{
    string jsonResponse = reader.ReadToEnd();
}
```

The developer might use `await ReadToEndAsync()` instead of the synchronous `ReadToEnd` call, especially if the stream is expected to be large. In the example below from iRacerMotionControl [23], the situation is more severe.

```csharp
private async void BT2Arduino_Send(string WhatToSend)
{
    await BTSock.OutputStream.WriteAsync(datab);
    txtBTStatus.Text = "sent";
    System.Threading.Thread.Sleep(5000);
    ...
}
```

The UI event thread calls `BT2Arduino_Send`, which blocks the UI thread by busy-waiting for 5 seconds. Instead of using the blocking `Thread.Sleep` method, the developer should use the non-blocking `Task.Delay(5000)` method to preserve similar timing behavior, and `await` it to prevent the UI from freezing for 5 seconds.

We found 115 instances of potentially long-running operations in 22% of the 167 apps that use `async/await`.

1 out of 5 apps misses opportunities in at least one `async` method to increase asynchronicity.

3.3.4 Unnecessarily capturing context

`async/await` introduce new risks if the context is captured without specifying `ConfigureAwait(false)`. For example, consider the following example from adsclient [2]:

```csharp
void GetMessage(byte[] response)
{
    ...
    ReceiveAsync(response).Wait();
    ...
}
```

If `GetMessage` is called from the UI event thread, the thread will wait for completion of `ReceiveAsync` because of the `Wait` call. When the await completes in `ReceiveAsync`, it attempts to execute the remainder of the method within the captured context, which is the UI event thread. However, the UI event thread is already blocked, waiting for the completion of `ReceiveAsync`. Therefore, a deadlock occurs. To prevent the deadlock, the developer needs to set up the `await` expression to use `ConfigureAwait(false)`.
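A sketch of the repaired callee is shown below; the body of `ReceiveAsync` is assumed, since only its caller appears above:

```csharp
// With ConfigureAwait(false), the continuation resumes on a thread-pool
// thread instead of the (blocked) UI event thread, so the synchronous
// Wait() in GetMessage no longer deadlocks.
async Task ReceiveAsync(byte[] response)
{
    int read = await stream.ReadAsync(response, 0, response.Length)
                           .ConfigureAwait(false);
    Process(response, read);   // now runs on the thread pool, not the UI thread
}
```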
Instead of attempting to resume the `ReceiveAsync` method on the UI event thread, it now resumes on the thread pool, and the blocking wait in `GetMessage` does not cause a deadlock any more. In the example above, although `ConfigureAwait(false)` is a solution, we fixed it by removing `await`, because it was also an instance of unnecessary `async/await` use. The developer of the app accepted our fix as a patch. We found 5 different cases of this type of deadlock, which can happen if the caller method executes on the UI event thread.

Capturing the context can also cause another problem: it hurts performance. As asynchronous GUI applications grow larger, there can be many small parts of `async` methods all using the UI event thread as their context. This can cause sluggishness, as responsiveness suffers from thousands of paper cuts. To mitigate these problems, developers should await the `Task` with `ConfigureAwait(false)` whenever they can: if the statements after the `await` expression do not update the UI, `ConfigureAwait(false)` should be set. It also enables a small amount of parallelism: some asynchronous code can run in parallel with the UI event thread instead of constantly badgering it with bits of work to do. Detecting this misuse is important because using `ConfigureAwait(false)` might prevent future bugs like deadlocks and improve performance.

1786 out of 2383 async methods do not update GUI elements in their call graph after await expressions. We found that `ConfigureAwait(false)` is used in only 16 out of these 1786 async methods in await expressions. All 1770 other async methods should have used `ConfigureAwait(false)`.

99% of the time, developers did not use `ConfigureAwait(false)` where it was needed.

<table> <thead> <tr> <th>Misuse</th> <th>#</th> <th>Method%</th> <th>App%</th> </tr> </thead> <tbody> <tr> <td>(1) Fire & Forget</td> <td>460</td> <td>19%</td> <td>76%</td> </tr> <tr> <td>(2) Unnec.
Async</td> <td>324</td> <td>14%</td> <td>26%</td> </tr> <tr> <td>(3) Potential LongRunning</td> <td>115</td> <td>5%</td> <td>22%</td> </tr> <tr> <td>(4) Unnec. Context</td> <td>1770</td> <td>74%</td> <td>86%</td> </tr> </tbody> </table>

4. TOOLKIT

Based on our findings, we developed a two-fold approach to support the developer: (1) Asyncifier, a refactoring tool that upgrades legacy callback-based APM code to take advantage of the async/await constructs (see Section 4.1), and (2) Corrector, a tool for detecting and fixing misuses of async/await in code (see Section 4.2). Asyncifier helps the developer in two ways: (1) the code is upgraded without errors, retaining original behavior, and (2) it shows how to correctly use the async/await keywords in production code. If the developer manually introduces async/await, Corrector will help in finding and removing misuses.

4.1 Refactoring APM to async & await

4.1.1 Challenges

There are three main challenges that make it hard to execute the refactoring quickly and flawlessly by hand. First, the developer needs to understand whether the APM instance is a candidate for refactoring, based on the preconditions in Section 4.1.2. Second, he must transform the code while retaining the original behavior of the code - both functionally and in terms of scheduling. This is non-trivial, especially in the presence of (1) exception handling, and (2) APM End methods that are placed deeper in the call graph.

**Exception handling** The refactoring from APM to async/await should retain the functional behavior of the original program, both in the normal case and under exceptional circumstances. In 52% of all APM instances, try-catch blocks are in place to handle exceptions. The try-catch blocks surround the End method invocation, which throws an exception if the background operation results in an exceptional circumstance.
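Concretely, the placement issue can be sketched as follows (a sketch only; `ShowMessage` and the variable names are illustrative):

```csharp
// Original APM callback: the End call throws inside the existing try block.
try {
    var response = request.EndGetResponse(asyncResult);
    // ... use response ...
} catch (WebException) {
    ShowMessage("Please check the data or WiFi connection");
}

// Naive refactoring: awaiting immediately at the Begin site moves the
// throwing site OUT of the original try block - the handler is bypassed.
var response = await request.GetResponseAsync();   // throws here now!

// Behavior-preserving refactoring: the await replaces the End call at the
// exact same place, so the existing catch still handles the failure.
try {
    var response = await task.ConfigureAwait(false);
    // ... use response ...
} catch (WebException) {
    ShowMessage("Please check the data or WiFi connection");
}
```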
These catch blocks can contain business logic: for example, a network error sometimes needs to be reported to the user (“Please check the data or WiFi connection”). Code listing 4 shows such an example. The naive approach to introducing async/await is to replace the Begin method invocation with an invocation of the corresponding TAP method, and await the result immediately. However, the await expression is the site that can throw the exception when the background operation fails. Thus, the exception would be thrown at a different site, and this can drastically change behavior. By introducing the await expression as a replacement of the End method call at the exact same place, existing exception handling will work exactly as it did before. This is a non-trivial insight for developers, because online examples of async/await only show the refactoring for extremely simple cases, where this is not a concern.

**Hidden End methods** The developer needs to take even more care when the End method is not immediately called in the callback lambda expression, but is ‘hidden’ deeper down the call chain. In that case, the Task instance must be passed down to where the End method invocation was, to retain exceptional behavior. This requires an inter-procedural analysis of the code: each of the methods through which the IAsyncResult ‘flows’ must be refactored, which makes the refactoring more tedious. The developer must trace the call graph of the callback to find the End method call, and in each encountered method: (1) replace the IAsyncResult parameter with a Task<T> parameter (with T being the return type of the TAP method), (2) replace the return type R with async Task<R>, or void with async void or async Task, and (3) introduce ConfigureAwait(false) at each await expression. As the results of the empirical study show, developers almost never use ConfigureAwait(false) where it should be used, even when its presence is critical to retain UI responsiveness.
Code listing 5 shows such an example.

4.1.2 Algorithm preconditions

An invocation of a Begin method is a candidate for refactoring to async/await-based constructs if it adheres to the following preconditions and restrictions:

P1: The APM method call must represent an asynchronous operation for which a TAP-based method also exists. Obviously, if the TAP-based method does not exist, the code cannot be refactored.

P2: The Begin method invocation statement must be contained in a regular method, i.e., not in a lambda expression or anonymous delegate method. The containing method will be made async. While it is possible to make lambdas and anonymous delegate methods async, this is considered a bad practice because it usually creates an async void fire & forget method.

P3: The callback argument must be a lambda expression with a body consisting of a block of statements. The call graph of that block must contain an End method invocation that takes the lambda’s IAsyncResult parameter as argument. This means that the callback must actually end the background operation.

P4: In the callback call graph, the IAsyncResult lambda parameter should not be used, except as argument to the End method.

P5: The state argument must be a null literal. As the IAsyncResult lambda parameter must be unused, its AsyncState property should be unused as well, so the state argument expression of the Begin method invocation should be null.

P6: In the initiating method (the method containing the Begin method invocation), the IAsyncResult return value of the Begin method should not be used, because it is returned by a method invocation that will disappear.

Code listing 6 shows a valid example in the context of these preconditions.

```csharp
void Action(WebRequest request)
{
    request.BeginGetResponse(asyncResult => {
        var response = request.EndGetResponse(asyncResult);
        // Do something with response.
    }, null);
}
```

Code 6: Adheres to the preconditions

Applying these preconditions to APM instances in real-world applications would restrict the number of APM instances that can be refactored. Fortunately, many instances in other forms can be refactored into this form. Code listing 2 shows an example that fails P3 and P5: the callback argument is a method reference, and the state argument is not null:

```csharp
void Button_Click(...)
{
    WebRequest request = WebRequest.Create(url);
    request.BeginGetResponse(CompletionCallback, request);
}
```

This instance can be refactored into the code shown in listing 7 by applying the “Introduce Parameter” refactoring to the request variable in the original Callback method.

```csharp
void GetFromUrl(string url)
{
    var request = WebRequest.Create(url);
    request.BeginGetResponse(asyncResult => Callback(asyncResult, request), null);
}

void Callback(IAsyncResult asyncResult, WebRequest request)
{
    var response = request.EndGetResponse(asyncResult);
    var stream = response.GetResponseStream();
    var content = stream.ReadAsString();
    Dispatcher.BeginInvoke(() => { textBox.Text = content; });
}
```

Code 7: Code listing 2 refactored to meet preconditions

Based on encountered cases in the analyzed code corpus, we have identified and (partially) implemented several such refactorings in Asyncifier. Examples are (1) identification of unused state arguments, which can be replaced with null (solves violations of P5), and (2) rewriting of some callback argument expressions (solves violations of P3).

4.1.3 Refactoring APM instances

Asyncifier detects all Begin method invocations that fulfill the preconditions. It takes the following steps to refactor the APM instance to async/await-based constructs.

**Traveling the call graph from APM Begin to End**

First, Asyncifier explores the call graph of the body of the callback lambda expression to find the invocation path to the End invocation. It does a depth-first search of the call graph, looking up the symbols of any non-virtual method that is encountered.
There are two possible scenarios: the End method invocation (1) is placed directly in the lambda expression, or (2) is found in the call graph of the lambda body, in another method's body. Code listing 6 is an example of the first case. In the second case, Asyncifier identifies three kinds of methods on the call graph path: (1) the initiating method, the method containing the Begin method invocation, (2) the result-obtaining method, the method containing the End method invocation, and (3) intermediate methods, the remaining methods on the path. Code listing 7 is an example of the second case. This example is used in the description of the following steps.

**Rewriting the initiating method**

In both cases, the initiating method needs to be rewritten. Asyncifier adds the async modifier to the signature of the initiating method. It changes the return type to Task instead of void, or to Task<T> for any other return type T:

```csharp
async Task GetFromUrl(string url) { ... }
```

Asyncifier replaces the Begin method invocation statement with a local variable declaration of a task that is assigned the result of the corresponding TAP method invocation. The parameterized type is the return type of the End method:

```csharp
request.BeginGetResponse(asyncResult =>
    Callback(asyncResult, request), null);
```

becomes:

```csharp
Task<WebResponse> task = request.GetResponseAsync();
```

It then concatenates the statements in the lambda expression body to the body of the initiating method:

```csharp
async Task GetFromUrl(string url) {
    var request = WebRequest.Create(url);
    Task<WebResponse> task = request.GetResponseAsync();
    Callback(asyncResult, request);
}
```

Finally, it replaces the reference to the asyncResult lambda parameter with a reference to the newly declared Task instance:

```csharp
async Task GetFromUrl(string url) {
    var request = WebRequest.Create(url);
    Task<WebResponse> task = request.GetResponseAsync();
    Callback(task, request);
}
```

**Rewriting the result-obtaining method**

Asyncifier updates the signature of the result-obtaining method as follows: (1) it adds the async modifier, (2) it replaces return type void with Task, or any other return type T with Task<T>, and (3) it replaces the IAsyncResult parameter with a Task<R> parameter, where R is the return type of the End method:

```csharp
async Task Callback(Task<WebResponse> task, WebRequest request) {
    var response = await task.ConfigureAwait(false);
    var stream = response.GetResponseStream();
    var content = stream.ReadAsString();
    Dispatcher.BeginInvoke(() => {
        textBox.Text = content;
    });
}
```

**Rewriting intermediate methods**

Intermediate methods must be rewritten if the End method is not invoked directly in the callback lambda expression body. Asyncifier refactors every such method recursively, applying the same steps as for the result-obtaining method. Additionally, at the call site of each method, the reference to the (removed) IAsyncResult parameter is replaced with a reference to the (newly introduced) task parameter.

4.1.4 Retaining original behavior

It is crucial that the refactored code has the same behavior, in terms of scheduling, as the original code. Both the Begin method and the TAP method start the asynchronous operation. In the APM case, the callback is only executed once the background operation has completed. With async/await, the same happens-before relationship exists between the await expression on the Task returned by the TAP method and the statements that follow it. Because the statements from the callback are placed after the await expression, which pauses execution until completion of the background operation, this timing behavior is preserved.

4.1.5 Implementation limitations

The set of candidates is restricted by tool limitations related to re-use of Begin or End methods.
First, there must not be other call graph paths leading from a Begin method call to the target End method; that is, the specific End method invocation must not be shared between multiple Begin invocations. Second, recursion in the callback through another Begin call that references the same callback again is not allowed (essentially, this is also sharing of an End method call). Third, Asyncifier does not support multiple End method invocations that correspond to a single Begin method invocation, for example through the use of branching. However, this case is very rare.

4.2 Corrector

We implemented another tool, Corrector, that detects and corrects the common misuses that we explained in RQ4. Corrector takes the project file as input and automatically corrects the misuses, if it finds any, without user intervention. Although this batch mode works to fix present misuses, it does not prevent users from making mistakes. Hence, Corrector also supports a Quick Fix mode for Visual Studio. This mode shows a small icon close to the location of the misuse and offers a transformation to fix the problem, similar to the Quick Fix feature in Eclipse.

1. Fire & forget methods: There is no fix that can be automated for this misuse. If a fire & forget method is converted to an async Task method and is awaited in the caller, it will change the semantics. Therefore, the developer's understanding of the code is required to fix these cases.

2. Unnecessary async/await methods: Corrector checks whether an async method body has only one await keyword, and whether this await is used for a TAP method call that is the last statement of the method. Corrector does not do this for async void (fire & forget) methods, because if it removed the await from the last statement of an async void method, it would silence any exception that can occur in that statement. To fix these cases, Corrector removes the async keyword from the method declaration and the await keyword from the TAP method call.
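The rationale behind this fix can be illustrated outside C#: Python's asyncio exhibits the same equivalence between awaiting a result inside a wrapper and returning the awaitable directly. A hedged sketch, not from the paper (the paper's setting is C# Tasks, not asyncio; all names here are invented):

```python
import asyncio

async def fetch(url):
    """Stands in for a TAP-style asynchronous method."""
    await asyncio.sleep(0)          # simulate asynchronous work
    return f"content of {url}"

async def wrapper_with_await(url):
    """The 'unnecessary async/await' form: one await, last statement."""
    return await fetch(url)

def wrapper_without_await(url):
    """The corrected form: pass the awaitable straight through."""
    return fetch(url)

async def main():
    a = await wrapper_with_await("http://example.org")
    b = await wrapper_without_await("http://example.org")
    assert a == b                   # both forms yield the same result
    return a

print(asyncio.run(main()))  # content of http://example.org
```

Removing the wrapper's async/await avoids allocating an extra coroutine frame, mirroring the state-machine overhead the C# fix eliminates.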
The method will then return the Task that is the result of the TAP method call, as shown in the examples of RQ4.

3. Long-running operations in async methods: To detect these operations, Corrector looks up the symbols of each method invocation in the bodies of async methods. After getting the symbol information, Corrector looks at the other members of the containing class of that symbol to check whether there is an asynchronous version. For instance, if there is an x.Read() method invocation and x is an instance of the Stream class, Corrector looks at the members of the Stream class to see whether there is a ReadAsync method that takes the same parameters and returns Task. By dynamically checking the members, Corrector can find asynchronous versions not only in the .NET framework but also in third-party libraries. Corrector also maps corresponding blocking and non-blocking methods that do not follow the Async suffix convention (e.g. Thread.Sleep -> Task.Delay). Corrector avoids introducing asynchronous versions of file I/O operations in loops, as this could result in slower performance than the synchronous version. After finding the corresponding non-blocking operation, Corrector simply replaces the invocation with the new operation and awaits it.

4. Unnecessarily capturing context: Corrector checks whether there is a statement that accesses a GUI element (read or write) in the call graph of an async method. It inspects every object's symbol to see whether the symbol is from the System.Windows or Microsoft.Phone namespaces. All GUI elements are in these namespaces, but not all constructs in these namespaces are GUI elements; this makes our analysis conservative. If Corrector does not find any GUI element access after the await points in an async method, it appends ConfigureAwait(false) to the TAP calls. Even though it is enough to use ConfigureAwait(false) on one TAP call in an async method, it is good practice to use it on every TAP call. 5.
EVALUATION

5.1 Quantitative evaluation

To evaluate the usefulness of Asyncifier and Corrector, we answer the following questions using our code corpus:

EQ1: Are they applicable? We executed Asyncifier over our code corpus. After each transformation, Asyncifier compiled the app in-memory and checked whether compilation errors were introduced. 54% of the 1245 APM instances adhere to the preconditions set in section 4.1.2, and all of these were successfully refactored. By manually checking 10% of all transformed instances, randomly sampled, we verified that Asyncifier refactors APM instances correctly. For the 46% of unsupported APM instances, Asyncifier does not touch the original program. The two main causes for unsuccessful refactorings are (1) instances that do not adhere to the preconditions, and (2) tool limitations. The former consist mostly of instances that cannot be refactored because of fundamental limitations of the algorithm. Examples are callback expressions that reference a field delegate, or APM End methods that are hidden behind interface implementations (both violations of precondition P3). The latter consist of the examples given in section 4.1.5. We also applied Corrector to the full corpus. All instances of type (2), (3), and (4) misuses were corrected automatically.

EQ2: What is the impact of refactoring on code? Asyncifier touches 28.9 lines on average per refactoring. This shows that these refactorings need automation support, because they touch many lines of code. Corrector touches one line per misuse of types (3) and (4) in section 4.2. It touches 2 or 3 lines per misuse of type (2); 2.1 lines on average.

EQ3: Is the tool efficient? For Asyncifier, the average time needed to refactor one instance is 508 ms, rendering Asyncifier suitable for an interactive refactoring mode in an IDE. Because the detection and fixing of type (2) and (3) misuses is straightforward, we did not measure their execution time.
However, detecting the type (4) misuse is expensive, as it requires inspection of the call graph of the async method. We found that analyzing one async method for this misuse takes 47 ms on average. This shows that Corrector can be used interactively in an IDE, even for the type (4) misuse.

5.2 Qualitative evaluation

To further evaluate the usefulness in practice, we identified the 10 most recently updated apps that have APM instances. We applied Asyncifier ourselves, and offered the modifications to the original developers as a patch via a pull request. 9 out of 10 developers responded, and they accepted every one of our 28 refactorings. We received very positive feedback on these pull requests. One developer would like to have the tool available right now: "I'll look forward to the release of that refactoring tool, it seems to be really useful." [33] The developer of phonegui-tartab [4] said that he had "been thinking about replacing all asynchronous calls [with] new async/await style code". This illustrates the demand for tool support for the refactoring from APM to async/await.

For Corrector, we selected the 10 most recently updated apps for each of the type (2) and (3) misuses. We did not specifically select 10 apps for the type (4) misuse, but Corrector did fix this misuse in the selected apps. In total, we selected 19 apps, because one app had both type (2) and type (3) misuses. Developers of 18 apps replied and accepted all our patches, corresponding to 149 instances of type (2), 38 instances of type (3), and 98 instances of type (4) misuses. In total, the 18 apps accepted 285 instances of Corrector transformations. Response to the fixes that removed unnecessary async/await keywords was similarly positive.
One developer pointed out that he had missed several unnecessary async/await instances that Corrector detected: "[…] I normally try to take the same minimizing approach, though it seems I missed these." [32] The developer of SoftbuildData [6] experienced performance improvements after removing unnecessary async/await: "[…] performance has been improved to 28 milliseconds from 49 milliseconds." Again, these illustrate the need for tools that support the developer in finding problems in the use of async/await. Furthermore, the developer of the playerframework [5] said that they missed the misuses because the particular code was ported from old asynchronous idioms. This demonstrates the need for Asyncifier, as it can help a developer upgrade his or her code without introducing incorrect usage of async/await.

6. DISCUSSION

6.1 Implications

Our study has practical implications for developers, researchers, and language and library designers.

Developers learn a new programming construct through both positive and negative examples. Robillard and DeLine [35] study what makes large APIs hard to learn and conclude that one of the important factors is the lack of usage examples. We provide hundreds of real-world examples of all asynchronous idioms on http://LearnAsync.net/. Because developers might need to inspect the whole source file or project to understand an example, we also link to highlighted source files on GitHub [39]. We also provide negative examples anonymously, without giving app names.

Language and library designers can learn which constructs and idioms are embraced by developers, and which ones are tedious to use or error-prone. Because some other major languages have plans to introduce similar constructs for asynchronous programming, this first study can guide their designers toward an improved design of such language constructs.
For instance, capturing the context might not be the default: developers are very likely to forget to use ConfigureAwait(false).

Tool vendors can take advantage of our findings on async/await misuse. IDEs such as Visual Studio should have built-in quick fixes (similar to ours) to prevent users from introducing misuse. For instance, if a developer introduces a fire & forget method, the IDE should give a warning unless the method is a top-level event handler.

Researchers in the refactoring community can use our findings to target future research. For example, as we see from Table 1, the usage of Task jumped from 1% to 8% in WP8. This calls for work on a tool that converts old asynchronous idioms for CPU-bound computations (e.g. Thread) to new idioms (e.g. Task).

6.2 Threats to Validity

Internal: Is there something inherent to how we collect and analyze the usage that could skew the accuracy of our results? First, the study focuses only on static usage of asynchronous constructs, but one use of a construct (i.e., a call site) could correspond to a large percentage of execution time, making the program very asynchronous at runtime. Likewise, the opposite could be true. However, we are interested in the developer's view of writing, understanding, maintaining, and evolving the code, not in the performance tools' view of the code (i.e., how much of the total running time is spent in asynchronous code). For our purposes, static analysis is much more appropriate.

External: Are the results representative? First, despite the fact that our corpus contains only open source apps, the 1378 apps span a wide range of domains, from games, social networking, and office productivity to image processing and third-party libraries. They are developed by different teams with 3376 contributors from a large and varied community. Our code corpus contains all Windows Phone apps from GitHub and CodePlex, without any random sampling or selection.
While we answer our research questions for the Windows Phone ecosystem, we expect the answers to cross the boundary from mobile to any platform written in C# (e.g. desktop, web). Asynchronous programming is similar on those platforms: developers have access to the same async/await language constructs, and similar APIs.

Reliability: Are our empirical study and evaluation reliable? A detailed description of our results, with fine-grained reports, is available online. Because we used an internal version of Microsoft's Roslyn, we had to sign an NDA, which prohibits us from releasing the binaries of any tool using it (AsyncAnalyzer, Asyncifier, and Corrector). We will be able to publish the tools based on a public release of Roslyn, which we expect by late Fall '13.

6.3 Future Work

Our study was limited to apps targeting the Windows Phone platform. However, we believe that the tools can also be used for apps targeting other C# platforms, such as desktop, web (ASP.NET) and console apps. Future work would entail a study of asynchronous programming on those platforms, similar to the one presented in this paper.

The refactoring tool that replaces APM instances with async/await-based code has several limitations, as mentioned in section 4.1.5. We plan to remove those limitations, and we expect to be able to show that the success rate of the refactoring tool will increase to 65%. As soon as there is a publicly available version of Roslyn, we plan to update and release all the now-unreleased tools.

7. RELATED WORK

Empirical Studies: There are several empirical studies [9, 20, 26, 29] on the usage of libraries or programming language constructs. To the best of our knowledge, there is no empirical study on asynchronous constructs and language constructs for asynchronous programming. We have previously conducted an empirical study [28] on how developers from thousands of open source projects use Microsoft's Parallel Libraries.
There is only a small intersection between asynchronous and parallel libraries: the Thread, Task, and ThreadPool constructs. In this paper, we studied these three constructs as 3 of the 5 different approaches for asynchronous CPU-bound computations.

Refactoring Tools: Traditionally, refactoring tools have been used to improve the design of sequential programs. There are a few refactoring tools that specifically target concurrency. We have used refactoring [13, 14] to retrofit parallelism into sequential applications via concurrent libraries. In the same spirit, Włoka et al. present a refactoring for replacing global state with thread-local state [40]. Schäfer et al. present Relocker [36], a refactoring tool that lets programmers replace usages of Java built-in locks with more flexible locks. Gyori et al. present Lambdaficator [21], which refactors existing Java code to use lambda expressions to enable parallelism. To the best of our knowledge, there is no refactoring tool that specifically targets asynchronous programming. In industry, ReSharper is a well-known refactoring tool, but it does not support async/await-specific refactorings [34]. Our refactoring helps developers design responsive apps, an area not explored so far [12].

8. CONCLUSION

Because responsiveness is very important on mobile devices, asynchronous programming is already a first-class citizen in modern programming environments. However, the empirical research community and tool vendors have not yet similarly embraced it. Our large-scale empirical study of Windows Phone apps provides insight into how developers use asynchronous programming. We have discovered that developers make many mistakes when manually introducing asynchronous programming based on the modern C# language features async/await. We provide a toolkit to support developers in preventing and curing these mistakes.
Our toolkit (1) safely refactors legacy callback-based asynchronous code to async/await, (2) detects and fixes existing errors, and (3) prevents the introduction of new errors. Our evaluation of the toolkit shows that it is highly applicable, and developers already find the transformations very useful and are looking forward to using our toolkit. We hope that our study motivates other follow-up studies to fully understand the state of the art in asynchronous programming.

9. REFERENCES

[38] Don Syme, Tomas Petricek, and Dmitry Lomov. The F# asynchronous programming model. In Proceedings of the 13th international conference on Practical
Intel® C++ Compiler unveils compute power of Intel® Graphics Technology for general purpose computing

Anoop Madhusoodhanan Prabha

Agenda
• Why enable Processor Graphics (GFX) for general purpose computing?
• Preliminary performance gain using GFX + CPU compute power
• Offload support
• GFX architecture
• Memory model
• Tuning applications for GFX
• Software stack for the Intel® C/C++ Compiler for GFX
• Intel® C++ compiler for GFX workflow
• Hardware and OS platforms supported
• Limitations
• Call to action

Why enable GFX for general purpose computing?
- 3rd generation Intel® Core™ Processors and above have significant compute power (CPU + GFX)
- GFX occupies a significant percentage of the processor silicon area.
- The GFX offload compiler helps fully utilize the silicon area of the processor.
- Seamless porting experience using the Intel® Cilk™ Plus programming model.

Ease of Porting to GFX

Original host code:
```c
void vector_add(float *a, float *b, float *c){
    for(int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    return;
}
```

Parallel host code using Intel® Cilk™ Plus:
```c
void vector_add(float *a, float *b, float *c){
    cilk_for(int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    return;
}
```

Offloading the function body to GFX:
```c
void vector_add(float *a, float *b, float *c){
    #pragma offload target(gfx) pin(a, b, c:length(N))
    cilk_for(int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    return;
}
```

Creating a GFX kernel for asynchronous offload:
```c
__declspec(target(gfx_kernel))
void vector_add(float *a, float *b, float *c){
    cilk_for(int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
    return;
}
```

Preliminary performance gain using GFX + CPU compute power

Speedup in performance w.r.t. CPU. The x-axis runs from pure GPU execution (left) to pure CPU execution (right):
- 0 – pure GPU execution mode
- 1 – pure CPU execution mode
- 0.1 – 10% of the workload on CPU and 90% on GPU
- 0.9 – 10% of the workload on GPU and 90% on CPU

System specification:
- OS: Ubuntu 12.04 (64 bit)
- Processor: Intel® Core™ i7-3770 @ 3.5GHz
- Memory: 16GB
- Processor Graphics SKU: GT2
- Compiler version: Intel C++ Compiler 15.0 Update 1 Beta
- HD Driver: 16.3.2.21305
- Compiler option: -std=c++11 -xAVX

Intel® Software Development Tools

Advanced performance C++ and Fortran compilers, MKL/IPP libraries and analysis tools for Windows* and Linux* developers on IA-based multi-core nodes. Deep system insights for embedded and mobile developers: an integrated software tool suite that provides deep system-wide insights to help:
- Accelerate time-to-market
- Strengthen system reliability
- Boost power efficiency and performance

Synchronous Offload
- Annotate a data parallel code section with `#pragma offload target(gfx)`
- Annotate functions invoked from the offloaded sections and global data with `__declspec(target(gfx))`
- The host thread waits for the offloaded code to finish execution
- Constraint: the `#pragma offload target(gfx)` statement must be followed by a `cilk_for` loop
- The compiler automatically generates both the host side and the GFX code

Synchronous offload: vector addition
```c
void vector_add(float *c, float *a, float *b) {
    #pragma offload target(gfx) pin(a, b, c:length(ARRAYSIZE))
    cilk_for(int i = 0; i < ARRAYSIZE; i++) {
        c[i] = a[i] + b[i];
    }
}
```

Quick look at Intel® Cilk™ Plus Array Notation

Original code:
```c
for(int i = 0; i < N; i++) {
    for(int j = 0; j < N; j++) {
        c[i][j] = a[i][j] * b[j][i];
    }
}
```

Array Notation version:
```c
for(int i = 0; i < N; i++) {
    c[i][0:N] = a[i][0:N] * b[0:N][i];
}
```

Synchronous offload – Matrix Multiplication Kernel
```c
void matmul_tiled(float A[][K], float B[][N], float C[][N]) {
    #pragma offload target(gfx) pin(A:length(M)) pin(B:length(K)) pin(C:length(M))
    cilk_for (int m = 0; m < M; m += TILE_M) {      // iterate tile rows in the result matrix
        cilk_for (int n = 0; n < N; n += TILE_N) {  // iterate tile columns in the result matrix
            float atile[TILE_M][TILE_K], btile[TILE_N];
            float ctile[TILE_M][TILE_N];
            ctile[:][:] = 0.0f;
            for (int k = 0; k < K; k += TILE_K) {   // calculate 'dot product' of the tiles
                #pragma unroll
                atile[:][:] = A[m:TILE_M][k:TILE_K];
                #pragma unroll
                for (int tk = 0; tk < TILE_K; tk++) {
                    btile[:] = B[k+tk][n:TILE_N];
                    #pragma unroll
                    for (int tm = 0; tm < TILE_M; tm++) {
                        ctile[tm][:] += atile[tm][tk] * btile[:];
                    }
                }
            }
            #pragma unroll
            C[m:TILE_M][n:TILE_N] = ctile[:][:];
        }
    }
}
```

1. TILE_M=16, TILE_K=16, TILE_N=32
2. sizeof(atile) = 1KB, sizeof(btile) = 128 bytes, sizeof(ctile) = 2KB
3. Total size of the local variables = 1KB + 2KB + 128 bytes = 3200 bytes < 4KB
4. These three local variables are allocated in the GRF since none of the tile addresses escape (an address escapes when the compiler cannot determine the lifetime of the local variable in all cases, e.g. when the address of an array is copied to another variable or passed as an argument)

The pin clause ensures the data is available in main memory for GPU access. The cilk_for annotation on the nested for loops collapses the iteration space of the nested loops. Use Intel® Cilk™ Plus Array Notation to explicitly generate vector instructions rather than traditional scalar instructions. #pragma unroll instructs the compiler to use direct register addressing, which is more efficient than indirect register addressing, which uses an address register as an index register.

Key Points
• #pragma unroll enables direct GRF addressing instead of inefficient indirect register addressing.
• Matrices are pinned – no data copying. The length clause is in elements – e.g. A is an M-element array of float arrays of length K.
• cilk_for is used to calculate 2D tiles in parallel.
• btile could be 2D (btile[TILE_K][TILE_N]), but the higher dimension is reduced as it gives no extra performance but takes extra register space.
• The compiler will generate an efficient series of octal-word reads to fill the atile.

Asynchronous Offload Support
- An API-based offload solution
- Kernels are created by annotating functions with `__declspec(target(gfx_kernel))`
- The above annotation creates named kernel functions (kernel entry points)
- Non-blocking API calls from the CPU until the first explicit wait is specified.
- GFX kernels are enqueued into an in-order GPGPU queue
- Explicit control over data transfer; data is decoupled from the kernel, and data persists across multiple kernel executions.
- The compiler just generates the GFX code.
- The user explicitly programs the host version of the offload section.

Asynchronous offload – vector addition

**Host Code**
```c
float *a = new float[TOTALSIZE];
float *b = new float[TOTALSIZE];
float *c = new float[TOTALSIZE];
float *d = new float[TOTALSIZE];

a[0:TOTALSIZE] = 1;
b[0:TOTALSIZE] = 1;
c[0:TOTALSIZE] = 0;
d[0:TOTALSIZE] = 0;

_GFX_share(a, sizeof(float)*TOTALSIZE);
_GFX_share(b, sizeof(float)*TOTALSIZE);
_GFX_share(c, sizeof(float)*TOTALSIZE);
_GFX_share(d, sizeof(float)*TOTALSIZE);

_GFX_enqueue("vec_add", c, a, b, TOTALSIZE); // Non-blocking offload
_GFX_enqueue("vec_add", d, c, a, TOTALSIZE); // Place next kernel in the in-order queue

_GFX_wait(); // wait for all tasks

_GFX_unshare(a);
_GFX_unshare(b);
_GFX_unshare(c);
_GFX_unshare(d);
```

**GPU Code**
```c
__declspec(target(gfx_kernel))
void vec_add(float *res, float *a, float *b, int size){
    cilk_for (int i = 0; i < size; i++){
        res[i] = a[i] + b[i];
    }
    return;
}
```

Tuning tips for targeting GFX
- Collapse nested loops by annotating them with cilk_for. This increases the iteration space, so more hardware threads can be put into action.
- #pragma simd or Intel® Cilk™ Plus Array Notation can be used to explicitly vectorize the offloaded code.
- Use the __restrict__ keyword and __assume_aligned() to keep the compiler from creating multiple code paths.
- Use the pin clause to avoid the data copy overhead from DRAM to GPU memory. This enables data to be shared between CPU and GPU.
- Consider using 4-byte elements rather than 1- or 2-byte elements, because gather/scatter operations on 4-byte elements are quite efficient, while for 1- and 2-byte elements they are much slower.

Tuning tips for targeting GFX contd...

• Consider an SOA (structure of arrays) design over AOS (array of structures) for your data structures to avoid scatter/gather instructions.
• The strongest feature of the GPU is the 4KB private memory per hardware thread. If the local variables cumulatively exceed 4KB, the remaining data needs to be accessed from the much slower stack.
• For int buf[2048] allocated in the GRF, for(i=0,2048) {... buf[i] ... } will be an indexed register access. To enable direct register addressing, consider unrolling the loop. Excessive unrolling might lead to code bloat and the kernel size exceeding 250KB.
• The JIT compiler might still spill some register variables to memory, impacting performance. Caching in local arrays should be done for 'hot' data.

GFX architecture (3rd generation Intel® Core™ Processors)

GFX building block hierarchy: slice / half-slice / EU / thread / vector lane. [Block diagram: four CPU cores and the GT last level cache sit alongside the GFX slices; each slice contains the fixed functions (3D & media pipelines), a GFX L3 cache, and half-slices with sampler, math, data port and instruction cache; each EU runs hardware threads with 16 vector lanes.]

Copyright © 2014, Intel Corporation. All rights reserved. *Other brands and names are the property of their respective owners.

GFX architecture highlights (3rd generation Intel® Core™ Processors)
- GFX is basically fixed functions + EUs + caches
- The EU is a general purpose, highly parallel processor:
  - 6/8 hardware threads per EU
  - each thread has 4K of register space (GRF – General Register File)
  - 16 vector lanes, 8 ops/cycle (16-wide ops take 2 cycles)
  - Architecture Register File (instruction pointer, address register etc.)
- The memory model is covered in the next slide.

Memory model – cache hierarchy (3rd and 4th generation Intel® Core™ Processors)
- The CPU L3 cache is shared between CPU and GPU: CPU L3 cache ↔ GPU last level cache (LLC)
- Processor graphics comes with an L3 cache which is shared among all EUs in one slice.
- eDRAM – embedded DRAM, only applicable to GT3e.
- DRAM – system RAM (global memory).
- Access to DRAM from the GPU is cached in either the LLC, L3 or eDRAM, depending on the SKU of the processor graphics (local memory).
- General Register File (GRF) of 4K per h/w thread (registers).
- No L1/L2 cache for GPGPU compute.

Software Stack for GFX

(Stack diagram: the heterogeneous application carries the target code as vISA embedded in its COFF/ELF image; in user mode the GFX offload runtime performs thread-space partitioning, argument setup, surface creation and task creation/enqueue on top of the Media Development Framework, whose vISA JIT translates vISA bytecode to native code; in kernel mode the Intel® HD graphics driver drives the GT hardware.)

Intel® Graphics Technology compiler workflow

**Compile time:**

- `icc -c a.cc b.cc` ➞ `a.o` and `b.o`
- `icc a.o b.o -o app` ➞ the executable has the target executable embedded

The target executable is extracted from the fat executable using the `offload_extract` tool (shipped with the compiler).

The `.gfxobj` section:

```
$ objdump -h app

app:     file format elf64-x86-64

Sections:
Idx Name          Size      VMA               LMA               File off   Algn
 ..
                  CONTENTS, ALLOC, LOAD, RELOC, READONLY, DATA
 11 .gfxobj       00008f20  0000000000000000  0000000000000000  000001488  2**0
                  CONTENTS, READONLY
```

Native Code Generation for Processor Graphics

- The initial one-time JITing overhead for each offload region can be avoided.
- Intended for those customers who know their targets in advance.
- The compiler option for targeting processor graphics is `-mgpu-arch=arch` (on Linux*) and `/Qgpu-arch:arch` (on Windows*).
<table>
<thead>
<tr> <th>ISA target value</th> <th>Intel® Core™ Processors</th> <th>Intel® Pentium® Processors</th> <th>Intel® Celeron® Processors</th> </tr>
</thead>
<tbody>
<tr> <td>ivybridge</td> <td>3rd Generation Intel® Core™ Processors</td> <td>Processor Model Numbers: 20xx</td> <td>Processor Model Numbers: 10xx</td> </tr>
<tr> <td>haswell</td> <td>4th Generation Intel® Core™ Processors</td> <td>Processor Model Numbers: 3xxx</td> <td>Processor Model Numbers: 29xx</td> </tr>
</tbody>
</table>

Hardware support

- 3rd and 4th generation Intel® Core™ Processors. These processors come with either Intel® HD Graphics, Intel® Iris™ Graphics or Intel® Iris™ Pro Graphics.
- Intel® Pentium® Processors with processor model numbers 20xx and 3xxx.
- Intel® Celeron® Processors with processor model numbers 10xx and 29xx.
- For more information on the processors, please refer to http://ark.intel.com

OS platforms supported

Operating systems:

• Windows* 32/64 bit. Windows* 7 (DX9) requires an active display; batch jobs are not supported. This restriction is relaxed in Windows* 8 and Windows Server 2012*.
• Linux* 64 bit:
  • Ubuntu 12.04 (Linux kernel versions: 3.2.0-41 for 3rd generation Intel® Core™ Processors and 3.8.0-23 for 4th generation Intel® Core™ Processors)
  • SLES11 SP3 (Linux kernel version: 3.0.76-11 for both 3rd and 4th generation Intel® Core™ Processors)
• No OS X* or Android* support as of now.

Limitations

- **Main language restrictions**
  - No exceptions, RTTI, longjmp/setjmp, VLAs, variable parameter lists, or indirect control flow (virtual functions, function pointers, indirect calls and jumps)
  - No shared virtual memory
  - No pointer- or reference-typed globals
  - No OpenMP* or Intel® Cilk™ Plus tasking
- **Runtime limitations**
  - No ANSI C runtime library except the SVML math library.
  - Inefficient 64-bit float and integer operations (due to HW limitations).
  - No debugger support for GFX.
Call to Action

- Evaluate Intel® System Studio 2015, which contains Intel® C++ Compiler 15.0, from http://registrationcenter.intel.com
- Download and install the Intel® HD Graphics driver:
  - For Windows*, download it from http://downloadcenter.intel.com
  - For Linux*, download the Intel® HD Graphics Drivers for Linux* from http://registrationcenter.intel.com
- GFX sample code (along with a tutorial) is shipped with the compiler. Try building and executing the samples. On Windows: gfx_samples.zip; on Linux: gfx_samples.tar.gz

Please evaluate both the synchronous and asynchronous offload paradigms and provide your feedback on the following:

1. Whether this programming paradigm fits your business needs.
2. The Linux OS flavors and versions on which you wish to have this offload feature.

References

- GFX H/W specs documentation – https://01.org/linuxgraphics/documentation
- Intel® C++ Compiler 15.0 User’s Guide documentation
- Intel® Cilk™ Plus webpage – http://cilkplus.org
- Getting Started with compute offload for Intel® Graphics Technology
- How to offload computation to Intel® Graphics Technology
- Code generation options for Intel® Graphics Technology

Legal Disclaimer & Optimization Notice

INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS”. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO THIS INFORMATION INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary.
You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Copyright © 2014, Intel Corporation. All rights reserved. Intel, the Intel logo, Xeon, Xeon Phi, Core, VTune, and Cilk are trademarks of Intel Corporation in the U.S. and other countries.

Optimization Notice

Intel’s compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice. Notice revision #20110804

## Comparison to OpenCL

<table> <thead> <tr> <th>OpenCL</th> <th>GFX offload</th> </tr> </thead> <tbody> <tr> <td>The platform and the devices in each platform are detected using the clGetPlatformIDs() and clGetDeviceIDs() APIs.
This step is needed because OpenCL supports GPUs shipped by different vendors.</td> <td>Detection of the integrated GPU is done automatically by the Intel® Graphics Technology offload runtime.</td> </tr> <tr> <td>The program to be run on the GPU needs to be put in a separate compilation unit (.cl) and built explicitly for the target GPU from the application host code using the clCreateProgramWithSource() and clBuildProgram() APIs.</td> <td>As long as the source code is annotated with the previously mentioned compiler hints, the existing code is ready to be run on the integrated GPU.</td> </tr> <tr> <td>A command queue for the GPU must be created explicitly using the clCreateCommandQueue() API.</td> <td>The command queue is created by the GFX offload runtime.</td> </tr> <tr> <td>Kernels have to be created at runtime from the program built for the target, using the clCreateKernel() API.</td> <td>The kernels are automatically created in both synchronous and asynchronous offload during the build process, not at runtime.</td> </tr> <tr> <td>Kernel invocation is not intuitive: the clEnqueueNDRangeKernel() API is used with the kernel name, the number of work groups and the number of work items per work group as parameters.</td> <td>No code change is required at the call site. Handled by the GFX offload runtime.</td> </tr> <tr> <td>Kernel function arguments are passed using the clSetKernelArg() API.</td> <td>No code change is required at the call site. Handled by the GFX offload runtime.</td> </tr> <tr> <td>Results from the GPU after kernel computation are copied to host memory using the clEnqueueReadBuffer() API.</td> <td>Synchronous offload – the copy of values is handled by the GFX runtime.
Asynchronous offload – an explicit copy from GPU to host is required.</td> </tr> <tr> <td>All kernel objects, memory objects, the command queue, the program and the context object must be released explicitly using the clReleaseKernel(), clReleaseMemObject(), clReleaseCommandQueue(), clReleaseProgram() and clReleaseContext() APIs.</td> <td>The release of task objects and buffer objects is done automatically by the GFX runtime.</td> </tr> <tr> <td>Portable solution. Works with any GPU.</td> <td>Non-portable solution. Works only with the integrated GPU.</td> </tr> </tbody> </table>

Generation and Jitting of vISA

- vISA generation is done by the compiler.
- The jitter translates vISA to Gen native ISA. Jitters are backward compatible and support previous generations of vISA.

<table>
<thead>
<tr> <th>Compiler</th> <th>Supports 2nd generation Intel® Core™ Processors</th> <th>Supports 3rd generation Intel® Core™ Processors</th> <th>Supports 4th generation Intel® Core™ Processors</th> </tr>
</thead>
<tbody>
<tr> <td>Intel® C++ Compiler 14.0 for GT</td> <td>Y</td> <td>Y</td> <td>Y</td> </tr>
<tr> <td>Intel® C++ Compiler 15.0 Beta for GT</td> <td>N</td> <td>Y</td> <td>Y</td> </tr>
</tbody>
</table>

<table>
<thead>
<tr> <th>MDF runtime version</th> <th>Supports 2nd generation Intel® Core™ Processors</th> <th>Supports 3rd generation Intel® Core™ Processors</th> <th>Supports 4th generation Intel® Core™ Processors</th> </tr>
</thead>
<tbody>
<tr> <td>2.4</td> <td>Y</td> <td>Y</td> <td>Y</td> </tr>
<tr> <td>3.0</td> <td>N</td> <td>Y</td> <td>Y</td> </tr>
</tbody>
</table>

## Intrinsics for GFX

<table> <thead> <tr> <th>API</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>_gfx_read_2d</td> <td>Reads a
rectangular area into memory pointed to by dst from a 2D image identified by img. The resulting area is laid out in dst memory linearly, row by row.</td> </tr> <tr> <td>_gfx_write_2d</td> <td>Writes a rectangular area from memory pointed to by src to a 2D image identified by img.</td> </tr> <tr> <td>_gfx_atomic_write_i32</td> <td>Performs an atomic operation (enum: __GfxAtomicOpType) on the previous value of the destination location using two source values. Used to serialize operations on the same memory location across iterations of the vector loop. Returns the previous value of the memory location.</td> </tr> <tr> <td>_gfx_add_i8/i16/i32/f32/f64</td> <td>Addition of the two source inputs</td> </tr> <tr> <td>_gfx_sub_i8/i16/i32/f32/f64</td> <td>Subtraction of the two source inputs</td> </tr> <tr> <td>_gfx_mullo_i8/i16/i32</td> <td>Multiplication for char, short and int data types</td> </tr> <tr> <td>_gfx_mul_f32/f64</td> <td>Multiplication for float and double data types</td> </tr> </tbody> </table>

### Intrinsics for GFX (Contd...)

<table> <thead> <tr> <th>API</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>_gfx_slli_i8/i16/i32</td> <td>Left shift of the operands (either in saturation mode or</td> </tr> </tbody> </table>

EU Features

- SIMD instructions
  - Instruction-level variable-width SIMD execution
  - Instruction compaction
  - Conditional SIMD execution via destination mask, predication and execution mask
  - An instruction can be executed in a SIMD pipeline consuming multiple clock cycles
- Region-based register addressing
- Direct or indirect (indexed) register addressing
- Instruction streams and the instruction cache are read-only
- 3rd generation Intel® Core™ Processors have the Gen7 generation of EUs, which allows dual-issue (limited to the most popular instructions). Future generations support dual-issue for more instructions.

GFX Instruction syntax

( pred ) inst cmod sat ( exec_size ) dst src0 src1 { inst_opt, ...
}

• pred – predication
• inst – instruction
• cmod – conditional modifier
• sat – saturation (clamping of the result to the nearest value)
• exec_size – maximum SIMD width specified for that instruction
• dst – specified as “rm.n<HorzStride>:type”
• src0, src1 – specified as “rm.n<VertStride;Width,HorzStride>:type”

The following example assembly instruction adds two packed 16-element single-precision float arrays in r4/r5 and r2/r3, writing the results to r0/r1, only on those channels enabled by the predicate in f0.0 along with any other applicable masks.

(f0.0) add (16) r0.0<1>:f r2.0<8;8,1>:f r4.0<8;8,1>:f

GFX Numeric Data types

- **Integer data types:**
  - 8-bit unsigned/signed integer or byte (UB/B)
  - 16-bit unsigned/signed integer or word (UW/W)
  - 32-bit unsigned/signed integer or doubleword (UD/D)
  - Packed Unsigned Half-Byte Integer Vector, 8 x 4-bit unsigned integers (UV)
  - Packed Signed Half-Byte Integer Vector, 8 x 4-bit signed integers (V)
- **Floating point data types:**
  - 32-bit single-precision floating point number (F)
  - 64-bit double-precision floating point number (DF)
  - Packed Restricted Float Vector, 8 x 4-bit restricted-precision floating point numbers (VF)

Register region and Region Parameters

An example of a register region \((r4.1<16;8,2>:w)\) with 16 elements.

## Categorization of Intel® HD Graphics

<table>
<thead>
<tr> <th>Graphics Level</th> <th>PC Segment</th> <th>Server Segment</th> </tr>
</thead>
<tbody>
<tr> <td>GT3e</td> <td>Intel® Iris™ Pro Graphics 5200</td> <td>NA</td> </tr>
<tr> <td>GT3 (28W)</td> <td>Intel® Iris™ Graphics 5100</td> <td>NA</td> </tr>
<tr> <td>GT3 (15W)</td> <td>Intel® HD Graphics 5000</td> <td>NA</td> </tr>
<tr> <td>GT2</td> <td>Intel® HD Graphics 4600/4400/4200</td> <td>Intel® HD Graphics P4700/P4600</td> </tr>
<tr> <td>GT1</td> <td>Intel® HD Graphics 2500</td> <td>NA</td> </tr>
</tbody>
</table>

Comparison of 3rd generation processor graphics and 4th generation processor graphics

<table>
<thead>
<tr> <th></th> <th>3rd generation Intel® Core™ Processor</th> <th>4th generation Intel® Core™ Processor</th> </tr>
</thead>
<tbody>
<tr> <td><strong>Processor graphics</strong></td> <td>Intel® HD Graphics 4000</td> <td>Intel® HD Graphics; Intel® HD Graphics 4200/4400/4600; Intel® HD Graphics 5000; Intel® Iris™ Graphics 5100; Intel® Iris™ Pro Graphics 5200</td> </tr>
<tr> <td><strong>Execution Units (EUs)</strong></td> <td>16</td> <td>10 / 20 / 40</td> </tr>
<tr> <td><strong>Floating point ops per clock</strong></td> <td>256</td> <td>160 / 320 / 640</td> </tr>
<tr> <td><strong>DirectX*</strong></td> <td>DirectX* 11.0, Shader Model 5.0</td> <td>DirectX* 11.1, Shader Model 5.0</td> </tr>
<tr> <td><strong>OpenGL*</strong></td> <td>OpenGL* 4.0</td> <td>OpenGL* 4.2</td> </tr>
<tr> <td><strong>OpenCL*</strong></td> <td>OpenCL* 1.1</td> <td>OpenCL* 1.2</td> </tr>
</tbody>
</table>

GT3 Top Level Block Diagram (4th generation Intel® Core™ Processors)

- 5 EUs per row
- 2 rows per subslice
- 2 subslices per slice
- 2 slices (40 EUs total) in GT3
- 7 threads per EU
- 280 threads in GT3
- 128 registers per thread
- 4KB per thread!
- 1120KB of register file in GT3
- 32KB instruction cache in each row (5 EUs)
- 256KB data cache per slice (L3 only)

GT2 Top Level Block Diagram (4th generation Intel® Core™ Processors)

GT1 Top Level Block Diagram (4th generation Intel® Core™ Processors)

Asynchronous offload – Enqueue multiple kernels

```c
__declspec(target(gfx_kernel))
void myKernel2(__gfx_surface_index si2d, int height, int width)
{
    _cilk_for (int r = 0; r < height; r++) {
        float tile[TILE_HEIGHT][TILE_WIDTH];
        for (int c = 0; c < width; c += TILE_WIDTH) {
            _gfx_read_2d(si2d, c*sizeof(float), r, tile, 32, 8);
            tile[:][:] += 20;
            _gfx_write_2d(si2d, c*sizeof(float), r, tile, 32, 8);
        }
    }
}
...
float *arr2dUnaligned = new float[h * w];
...
GfxImage2D<float> i2d(arr2dUnaligned, h, w); // New API to represent 2D memory

_GFX_enqueue("myKernel2", i2d, h, w); // 2D image reuse between kernels.
_GFX_enqueue("myKernel2", i2d, h, w);
_GFX_wait(); // Blocks until all GPU tasks are done

// i2d destructor is called here:
// - writes data back to CPU
// - sees that the reference count is zero and
//   destroys the underlying surface as well
```

### Hardware supported

<table>
<thead>
<tr> <th>3rd/4th generation Intel® Core™ Processors</th> <th>Intel® Pentium® Processors</th> <th>Intel® Celeron® Processors</th> </tr>
</thead>
<tbody>
<tr> <td>Intel® HD Graphics 2500</td> <td>Processor Model Numbers: 20xx</td> <td>Processor Model Numbers: 10xx</td> </tr>
<tr> <td>Intel® HD Graphics 4000</td> <td></td> <td></td> </tr>
<tr> <td>Intel® HD Graphics 4200</td> <td>Processor Model Numbers: 3xxx</td> <td>Processor Model Numbers: 29xx</td> </tr>
<tr> <td>Intel® HD Graphics 4400</td> <td></td> <td></td> </tr>
<tr> <td>Intel® HD Graphics 4600</td> <td></td> <td></td> </tr>
<tr> <td>Intel® HD Graphics 5000</td> <td></td> <td></td> </tr>
<tr> <td>Intel® HD Graphics 5100</td> <td></td> <td></td> </tr>
<tr> <td>Intel® HD Graphics 5200</td> <td></td> <td></td> </tr>
</tbody>
</table>

- For more information on hardware, please visit:
[http://ark.intel.com](http://ark.intel.com)

GFX Object Lifetime management

- The runtime shipped with the HD driver maintains task, buffer and image objects during program execution.
- A task object is created when the kernel is enqueued and stays alive until successful completion of _GFX_wait().
- Buffer and image objects are both reference-managed. The runtime tracks references to these objects from the user code and from enqueued kernels.
- A buffer or image object is destroyed by the runtime when its reference count drops to 0.
- Buffers are created lazily when a kernel is enqueued, to avoid redundant reallocations across many _GFX_share calls.

## Comparison of GT3, GT2 and GT1 on 4th generation Intel® Core™ Processors

<table>
<thead>
<tr> <th>Description</th> <th>GT3</th> <th>GT2</th> <th>GT1</th> </tr>
</thead>
<tbody>
<tr> <td>Slice count</td> <td>2</td> <td>1</td> <td>1</td> </tr>
<tr> <td>Subslice count</td> <td>4</td> <td>2</td> <td>1</td> </tr>
<tr> <td>EUs (total)</td> <td>40</td> <td>20</td> <td>10</td> </tr>
<tr> <td>Threads (total)</td> <td>280</td> <td>140</td> <td>70</td> </tr>
<tr> <td>Threads/EU</td> <td>7</td> <td>7</td> <td>7</td> </tr>
<tr> <td>L3 cache size</td> <td>1024KB</td> <td>512KB</td> <td>256KB</td> </tr>
</tbody>
</table>
Usability and user generated content: Web 2.0 sites with complex data structures

Wouter Kool
w.l.c.kool@student.utwente.nl

ABSTRACT
The use of the Web is changing and users demand more collaborative and interactive web environments. A philosophy in which several ideas and techniques are combined to meet these expectations is called Web 2.0. The success of Web 2.0 applications depends on the number of people that use them. Therefore website usability becomes even more important than it already was. This research focuses on finding usability issues of Web 2.0 websites with complex data structures. Theorymaps, an existing Web application in which users can create, visualize and compare research theories, has been tested using ‘thinking aloud’ testing. The retrieved problems are analyzed and classified. Based on the results found in this case study, future research is proposed.

Keywords
Web 2.0, Rich Internet Application, usability, human computer interaction

1. INTRODUCTION
During the last few years the use of the Web has been changing rapidly. As websites exclusively based on HTML are no longer considered useful enough, more websites are changed into interactive services. When the Web was invented and the first web client was developed by Tim Berners-Lee [27], it was not only used to read the Web, but also to write it. However, when the popular web browser Mosaic made the Web accessible to a large community, the possibility to easily create content within the Web browser disappeared [27]. Now a new trend in web application design arises, which focuses on user participation and collective intelligence instead of one-way communication between website owner and user. This trend covers a broad area of techniques and ideas which is often referred to as Web 2.0 [25]. With Web 2.0, great possibilities and opportunities in communication and knowledge sharing arise.
Consequently, more and more companies adapt their websites to the new Web 2.0 ideas to give users the experience they want, by creating interactive websites and web applications which provide participation, collaboration and rich information. This change will result in new usability issues, because people are used to and understand the page-based model of the Web [26]. However, due to the fast-changing environment and the great technical possibilities, designers focus on efficiency and effectiveness, but often overlook principles of good design and the usability of their applications [26].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

12th Twente Student Conference on IT, January 22, 2010. Copyright 2010, University of Twente, Faculty of Electrical Engineering, Mathematics and Computer Science.

In Web 2.0 the user is the main value creator and therefore user experience is an important factor; however, little research has been done on usability concerning Web 2.0 or user-generated content. Many articles can be found that discuss the phenomenon Web 2.0, its characteristics and the techniques behind it [5,9,12,17,25,26]. Some of these articles also say that Web 2.0 will result in new usability issues; however, the kind of issues is not discussed. Only one article about the kind of problems that occur with these websites, written by Nielsen [24], was found. Nielsen says that many websites fail to get the basics of their site right because they rush too much to Web 2.0 functionality. This functionality does not work without good basics.
Although these problems occur with Web 2.0 websites, they are not directly related to Web 2.0 technologies. They exist because old principles of design are neglected. Research about usability regarding Web 2.0 environments with complex data was not found. This research aims to explore and analyze the usability issues of a Web 2.0 application with complex data structures. A web application called ‘Theorymaps’ [33], which people can use to create, visualize and compare complex data structures, will be used [18,19]. In this study complex data structures are causal relations proposed in scientific articles. Further clarification on complex data structures will be given in Chapter 2. The main research question is: What usability issues occur using a Web 2.0 application, working with complex data structures?

2. RESEARCH BACKGROUND
2.1 The Web
2.1.1 Web 2.0
The term Web 2.0 was first used by Tim O’Reilly in 2004 [26]. He concluded that the dot-com crash was a turning point for the Web. The characteristics of the websites that survived the crash marked and started a new philosophy. Many researchers have tried to describe specifically what the term includes, but a complete consensus has not yet been reached. This is partly due to the fact that Web 2.0 consists of a broad area of new ideas about using the Web rather than a new technology [31]. Hoeg et al. [35] describe Web 2.0 as “the philosophy of mutually maximizing collective intelligence and added value for each participant by formalized and dynamic information sharing and creation”. Collective intelligence can be described as all available knowledge in a network of people and computers, which combined contains more intelligence than any individual person, group or computer can have by itself [13]. Hoeg’s description means that the Web must no longer be seen as a one-way communication network from companies and web designers to consumers, but as a platform where everybody can easily access, share and design their own content.
A Web 2.0 platform consists of providers which facilitate the way of communication, but the main factor that creates value is the user, who provides the platform with vast amounts of content which is available to every other user in the world [12]. Implementing the ideas of Web 2.0 in their websites and Web applications will help organizations to adapt to changes in their environment and keep their competitive positions. Globalization and a shift towards a knowledge-based economy require a change in information management and value creation which Web 2.0 can provide [35]. In today's Web environment users demand free, democratic, interactive and rich services and content [17]. The fulfillment of these requirements determines the user experience, or the level of user satisfaction. A good experience will keep users coming back and will let the amount of valuable information created by them grow. In the knowledge-based economy this information has become a main source of value creation [35].

2.1.2 Characteristics
The features of websites that fit the Web 2.0 philosophy are fundamentally different from those of websites which do not. Web 2.0 providers know that it is important to reach as many people as possible, because the more people use their product, the more people are attracted and the more rich, extensive and dynamic the information shared on their website will be [12]. To make their website accessible for the wide variety of users who want to participate and collaborate with each other, they make their service straightforward, because “users will not spend too much time or effort into a website they cannot rapidly decipher” [12]. Besides being intuitive, the website has to appeal, which means it has to be dynamic and interactive, it has to contain rich information and it has to have a social network structure.
In this context a network structure means the users are not only connected to the service provider, but also to other users and their created content. Websites or web applications that fulfill these requirements behave more and more like desktop applications [36]. However, the user has the benefits of being connected and does not have to download and install software. Furthermore, services can be updated and tested in real time with real users and without explicit user participation. Christopher Alexander [25] described this as ‘the perpetual beta’. Another important characteristic of Web 2.0 is ‘users as co-developers’: for example, many Web 2.0 sites have a user-generated ‘folksonomy’ instead of the traditional hierarchical taxonomy [32]. The last important difference between Web 2.0 and Web 1.0 is the use of data. Web 2.0 has similarities with the open source movement, while traditional software developers keep their know-how closely guarded within the company. Open source software can be used and changed by any user. This results in dynamic software that can easily be adapted to users’ different needs. Web 2.0 providers often also implement the idea of free distribution and use to gain a large amount of user data. For example, YouTube uses closed source software, but everybody can embed its videos on their own site. Moreover, Web 2.0 providers sometimes provide users with an open Web Application Programming Interface (API), a set of definitions which can be used for communication between an application and another software program. An API can also be used to develop applications that use the data of the Web 2.0 site. A good example is Google Maps [10], which has an API that can be used to embed the application on a website and to manipulate and add content to the maps. Thus, in this trend, not only the official producer but everybody can develop an interface [32].
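To make the idea of an open API concrete, the sketch below parses the kind of JSON response such an API might return. This is only an illustration: the payload, field names and the helper function are invented assumptions, not taken from any real API.

```python
import json

# Hypothetical JSON payload such as a content site's open API might return.
# All field names and values here are invented for illustration.
api_response = """
{
  "videos": [
    {"title": "Intro to causal maps", "tags": ["causality", "research"]},
    {"title": "Web 2.0 basics", "tags": ["web", "collaboration"]}
  ]
}
"""

def titles_tagged(payload: str, tag: str) -> list:
    """Return the titles of all items carrying the given tag."""
    data = json.loads(payload)
    return [v["title"] for v in data["videos"] if tag in v["tags"]]

print(titles_tagged(api_response, "web"))  # ['Web 2.0 basics']
```

A third-party developer could combine such responses with data from other sources in one interface, which is exactly the kind of reuse an open API enables.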
As a result, more web applications are developed that collect data from several sources and display them in one interface. Such an application is called a mashup [5].

2.1.3 Rich Internet Applications

Traditional internet applications are built on the 2-tier client-server model and use exclusively HyperText Markup Language (HTML) to present content. However, HTML alone cannot provide users with dynamic applications. An important technological trend related to Web 2.0 is the Rich Internet Application (RIA). RIAs use asynchronous techniques such as AJAX to enable real-time data communication and dynamic rendering [9]. These applications provide the user with a dynamic interface which is quite similar to that of a desktop application. The most well-known type of RIA is created using ‘Asynchronous JavaScript and XML’ (AJAX) [9]. AJAX is not a technique on its own; it is a combination of technologies, the most important of which are JavaScript, XHTML, XML, CSS, XSLT and XMLHttpRequest. These technologies make it possible to update only the necessary part of a web page, which decreases the required bandwidth [36]. AJAX also makes it possible to fetch data in the background. This results in smooth interaction, like a desktop application, because the user interface remains usable while the request is being handled. Another aspect of RIAs is the 3-tier client-server model. Many Web 2.0 applications are based on this model [9], which consists of a client tier, an application tier and a database tier. In a 3-tier structure, client requests are not sent directly to the database server, but to the application tier, which can consist of web servers and application servers. The web servers can ensure load balancing by passing client requests to application servers which provide the application logic. To do this the application server uses data from the database server. This results in shorter response times.
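The role of the middle tier described above can be sketched minimally as follows. The round-robin policy and all class names are illustrative assumptions of this sketch, not details taken from the cited sources.

```python
from itertools import cycle

# Minimal sketch of the 3-tier idea: a web (middle) tier balances incoming
# client requests over several application servers instead of letting every
# client hit the database directly. All names are hypothetical.

class AppServer:
    def __init__(self, name: str):
        self.name = name
        self.handled = 0  # count of requests this server has processed

    def handle(self, request: str) -> str:
        # Application logic would run here, using data from the database tier.
        self.handled += 1
        return f"{self.name}:{request}"

class WebTier:
    """Round-robin dispatcher standing between clients and app servers."""
    def __init__(self, servers):
        self._next = cycle(servers)

    def dispatch(self, request: str) -> str:
        return next(self._next).handle(request)

servers = [AppServer("app1"), AppServer("app2")]
tier = WebTier(servers)
for i in range(4):
    tier.dispatch(f"req{i}")

print([s.handled for s in servers])  # load spread evenly: [2, 2]
```

In a real deployment the dispatch policy would also consider server load and health, but even this naive round-robin shows how the middle tier keeps any single back-end server from being overloaded.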
For example, if in a 2-tier structure a large group of users sends requests simultaneously and directly to a database server, the server could be overloaded. If instead requests are sent to a ‘middle tier’ which can perform the calculations and logic and balance the requests over several servers, the server load can be decreased. The improved performance of RIAs is also due to more of the application’s functionality being placed in the client browser. This means that the client provides extended functionality, which reduces the amount of server communication [36].

2.2 Theorymaps

To explore usability problems in a Web 2.0 platform with complex data structures, a web application called Theorymaps [33] has been used to find these kinds of problems. In this section Theorymaps will be described and its relevance to Web 2.0 and complex data structures will be explained.

2.2.1 Description

Theorymaps is a tool to transform traditional scientific articles into causal maps [18, 19]. A user can create his/her own causal map using a wizard which consists of three steps:

1. Describe theory
2. Add variables
3. Add causal links

Completing these steps results in an automatically generated causal map. The theory can then be compared to other theories which use the same variables. It is also possible to map the correlation between variables when observing them or when doing experimental interventions. Hence, different theories can be compared and resemblances and contradictions between them detected. Besides correlation, users can adjust the state of a variable (increasing, decreasing, constant) when observing it or when making interventions, to see the effect on other variables within that theory. The tool provides users with new methods of visualizing, analyzing, comparing and searching for theories.

2.2.2 Web 2.0

Theorymaps can be labeled a Web 2.0 web application.
It contains all the main characteristics mentioned in section 2.1:

- Intuitive/straightforward
- Dynamic
- Interactive
- Collaborative

These characteristics should lead to a large pool of users, which in turn leads to a website with rich and extensive information. Theorymaps asks users to create their own theories and compare them with the theories of others. This is an interactive and dynamic process in which the provider of the website is only the facilitator; together, users create a collective intelligence which grows with every new theory that is added to the application. Being labeled Web 2.0 means that the website and its interactive functions have to be intuitive and therefore straightforward. That is the main factor on which usability depends.

2.2.3 Complex data structures

Theorymaps is a web application in which causal maps can be created. A causal map is an example of a complex data structure. A data structure can be defined as a way in which sets of data are organized in a particular system. Simple data structures do not have an extensive or complicated organization. An example is a ‘tag’. Tags are keywords used to describe a piece of content, for example a picture on Flickr or a video on YouTube. They can be used to easily find other content that contains the information the tag refers to. The tag and the content it refers to have a one-link structure, a simple data structure. A complex data structure has a data organization which looks like a graph. In Theorymaps, variables of a theory can be linked to every other variable created. This can result in a complex structure in which not only variables are connected but different theories can be linked to each other as well. These structures are complex data structures.

2.3 Usability

2.3.1 Definition

A definition of usability is given by Dumas and Redish [7]: “Usability means that the people who use a product can do so quickly and easily to accomplish their own tasks”.
The definition presumes four points:

- Usability means focusing on users
- People use products to be productive
- Users are busy people trying to accomplish tasks
- Users decide when a product is easy to use

Another definition, more specialized to websites, is given by Krug [15]. He states that a website is usable if it is self-evident. A user should be able to understand what the website is about and how it should be used without expending any effort thinking about it. He presumes three facts:

1. People do not read pages. They scan them.
2. People do not make optimal choices. They satisfice.
3. People do not figure out how things work. They muddle through.

Satisficing is a word coined by Herbert Simon [15]. It is a combination of the words satisfying and sufficing, and it refers to a strategy in which a user chooses the first reasonable option instead of searching for the best option. These three facts arise because for most people it does not matter whether they understand how something works as long as they can use it. Furthermore, people do not search for better ways once they have found one. If a better option comes across they will use it, but they will seldom look for one [15].

2.3.2 Usability testing

Usability greatly influences user satisfaction and the users’ willingness to revisit the website [1]. Research [4] showed that poor usability resulted in the failure of 40% of shop transactions and 50% of potential sales. Usability is therefore one of the most important quality aspects, especially for popular websites [2]. Because Web 2.0 websites (including Theorymaps) focus on attracting as many users as possible, the importance of usability testing is even greater. Many website designers know that their website has to be straightforward and intuitive, but they feel that the results of usability testing will not justify the time and costs it absorbs [15]. However, many websites are still not sufficiently usable.
Designers fail to think from a user’s perspective in the right way. Their knowledge of how the website works, together with contextual information about the subject matter and technical design, makes it almost impossible to understand how a user without this knowledge will react to the designer’s website [15].

2.3.3 Techniques

Usability testing is one of many techniques to make sure a website is well designed and people will be able to use it without real problems. Other techniques are heuristic evaluation and the cognitive walkthrough; these techniques, however, do not involve user testing. Nielsen [30] stated that the most effective source of information concerning the usability of a website is laboratory testing with users. There are several methods which can be applied to user testing. Two of the most well-known and widely used are ‘undertaking tasks’ and ‘thinking aloud’ [1, 34]. Undertaking tasks is a method in which a test user browses a web page following one or more tasks created by the test designer. Thinking aloud means the test user is asked to say everything he/she thinks out loud during the test. Thinking aloud has proved to be a valuable observation method for detecting and analyzing problems [34]. Both undertaking tasks and thinking aloud were used in this research.

3. METHODOLOGY

3.1 Qualitative research

There are different ways to do research, depending on the research situation and the desired results. In this paper, qualitative rather than quantitative research has been done. The aim of qualitative research is a complete and detailed description of a situation. It is subjective research, which means the individual interpretation is important and the resulting data is richer, but less useful for generalization. Quantitative research consists of explaining situations in which events can be generalized, counted and statistically analyzed [28]. In usability testing, qualitative research is often used.
Only a few test users are questioned in depth, with the purpose of finding as many different individual problems as possible rather than patterns or other generalizations. These problems and their causes are unpredictable, and therefore every single result is useful on its own. For example, if a website designer wants to know whether his navigation structure is clear to users, he does not want to know what percentage of a large group of test users says ‘yes’ or ‘no’; he wants to know what problems occur and why they occur.

3.2 Test users

3.2.1 Number of test users

Because qualitative research aims to produce rich information, only a few test users are necessary to find the results I want. Nielsen [20, 21, 23] discovered that a good usability study does not have to involve a large number of test users. He studied the relationship between the number of test users and the percentage of usability problems found. He concluded that with fifteen test users almost all problems would be found, but that the optimum is around four test users; the contribution of every extra user falls exponentially (Figure 1). Nielsen calculated the optimum based on three assumptions:

1. Usability testing involves costs. Using test users who do not discover new problems therefore makes the test less efficient, and discovering all problems would be expensive.
2. The researchers intend to do iterative testing, which means that after the first test round has taken place, the web application will be adjusted and another test round will be done.
3. The most severe problems are among the first problems found.

With the optimum number of users, the percentage of problems found would be approximately 75%. This is also supported by Krug [15]. In my research I did not face user compensation or other costs; however, because of the short time span and the intention to do more test rounds, I chose to use five test users.
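The exponentially falling contribution of each extra user that Nielsen describes is commonly modeled by the formula found(n) = 1 − (1 − L)^n, where L is the share of problems a single user uncovers (a commonly cited estimate is L ≈ 0.31). A small sketch, assuming that formula and estimate:

```python
# Expected fraction of usability problems found by n test users, assuming
# each user independently uncovers a share L of all problems (L ~ 0.31 is
# the commonly cited estimate for Nielsen's model).
L = 0.31

def problems_found(n: int, rate: float = L) -> float:
    """Fraction of all usability problems expected to be found by n users."""
    return 1 - (1 - rate) ** n

for n in (1, 4, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.1%}")
```

With four users this gives roughly 77% and with five about 84%, in line with the approximately-75% optimum cited above, while fifteen users approach 100%, matching the claim that almost all problems would then be found.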
Five users also seemed sufficient given the problems found during testing. The fourth test user discovered 26 problems, only 6 of which were new. The last user found 4 new problems of a total of 30.

3.2.2 Profile

The test users were chosen without special requirements, except that they had to be students or University staff. Using test users with a different background would not improve the quality of the results, because Theorymaps mainly aims at this target group. Four of the five participating test users are students and one is a member of the University staff. I intended to use test subjects with different backgrounds, interests and internet experience, and the subjects were chosen based on the information I had about their profiles in advance. All of them were asked about their study and online behavior to identify their internet experience and background. This information indeed turned out to be very varied, ranging from no technological background and few hours spent online per week to an (Information) Technology background and much internet experience (Table 1). This makes the pool of users a good reflection of the variety that will be encountered in the real target group.

<table>
  <thead>
    <tr>
      <th colspan="2">Table 1. User profile</th>
    </tr>
    <tr>
      <th>User</th>
      <th>Hours browsing (week)</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>User 1</td><td></td></tr>
    <tr><td>User 2</td><td></td></tr>
    <tr><td>User 3</td><td></td></tr>
    <tr><td>User 4</td><td></td></tr>
    <tr><td>User 5</td><td></td></tr>
  </tbody>
</table>

3.3 Methods

The methods used in the testing are thinking aloud, facial expression recording and screen capturing. The test users were asked to say everything they thought while exploring the website. This sound was recorded, and in addition a video camera recorded their facial expressions. Special software recorded every action on the screen.
Afterwards these recordings could be synchronized, which made it possible to analyze the data together at the same time. During the tests the users had to complete two types of tasks: problematic tasks and structured tasks. Structured tasks are tasks in which the user is directed from page to page and has extensive guidelines. Problematic tasks have a goal, but no guidelines. For example, a user who has to order a specific book in an online book store faces a problematic task; asking a user to find the search feature on the current web page to search for a specific book is a structured task. After this task more structured tasks follow, and eventually the same goal will be reached [1]. Research [1] showed that structured tasks reveal more problems on average, but problematic tasks reveal more severe problems. In my tests the main tasks were problematic and between these tasks structured tasks were carried out. During both kinds of tasks, direct questions about website appearance, choices and problems were posed. After completing all tasks I asked the users to mark the usability of Theorymaps. The tests took approximately fifty minutes. A script of the test can be found in Appendix A.

4. FINDINGS

The results of the five usability tests are presented in the following sections. The first section lists the actual problems that were found. Sections 4.2 and 4.3 present a severity assessment and a heuristic classification, which are followed by an analysis of the results. Finally, in section 4.5 the test users’ satisfaction is presented.

4.1 Problems

The five tests resulted in a total of 27 usability problems. A general classification is used to structure the problems. The classes are ‘problems in website structure or appearance’, ‘problems in understanding the website’ and ‘problems in programming’.

4.1.1 Problems in website structure or appearance

P1. Clicking on the website logo does not navigate to the start page. The logo navigates to ‘Theories’, which is not the start page.
Because the start page itself is not clear, this leads to extra confusion.

P2. The white buttons at the top of the page do not stand out enough. The white buttons are not seen by the test users unless they want to make or adjust a theory. The buttons ‘Correlation’ and ‘What-if’ are not recognized, and therefore the test users do not know what the possibilities of the website are. It seems they do not expect other website functions to have the same kind of button as those for making or adjusting a theory.

P3. It is not clear enough that the variables are on the left and that you can drag them. Because the page has many ‘attractive’ elements, the variables are not seen directly, whereas ideally the variables and their explanation should be focused on first in order to understand the function.

P4. The ‘what if’ button is interpreted as a help function. Users associate the question-mark button with a help function, and the label ‘what if’ reinforces that belief.

P5. It should be possible to move variables directly within the table. There were situations where it was possible to move the variables directly, but not in all situations. This does not work efficiently.

P6. The possibility to drag a map during the wizard gives the impression it is directly adjustable.

P7. The titles in the colored area at the top of the page are not read. The titles in the colored areas are not read because they seem to be part of the headers. Apparently users do not connect the text in the header with the information on that web page, and therefore important information remains unknown. For example, the concept ‘variable’ is not completely clear to every user, and not knowing that a variable is able to increase or decrease, which is noted in the title, can lead to mistakes.

P8. It is not possible to connect variables to creators. If a user adds a variable that already exists in the system, a definition is added to this existing variable.
The definitions can differ, but only the first creator of the variable (and the first definition) is named. If a second creator wants to use the variable in his theory, he may want to use the second definition, but he cannot indicate this.

P9. Error messages are not very prominent.

4.1.2 Problems in understanding the website

P10. It is not clear enough what the correlation function does. The term correlation itself is not sufficient to understand the function. The results were not understood or were misinterpreted, and even the fact that different theories could be compared was not always clear.

P11. It is not clear enough how to remove a variable. Test users did find the box after a short while, but it was clear that in an interactive environment it annoys people if they cannot find a simple function quickly.

P12. It is not clear enough what is meant by intervening (the what-if function). Test users did not really understand the difference between ‘observations’ and ‘experimental interventions’.

P13. The navigation in the wizard is not clear enough. After completing the first step it is not possible to go back to it. It is possible to go further, but there are two different ways to do this. One of the two options, however, does not have a button to finish the wizard, and using this option confuses the user about finishing the wizard. Almost all the test users expected to see navigation buttons at the bottom of the web page.

P14. The concept ‘tag’ is not understood. The concept ‘tag’ is not known or understood by every user. Hence, people do not add tags and confuse them with variables and theories. Tags on variables are especially confusing, because both often consist of only one word.

P15. It is not exactly clear what a scope is.

P16. The bookmark function is not clear and not interesting enough. There are a few problems concerning the ‘add bookmarklet’ page.
First, the browser names can give the user the idea that the information is about browser compatibility, which many users do not find interesting. The second problem is the use of the bookmarklet: it is not clear how to use the button. If the user has found an article in Scopus, it is not clear that the button has to be clicked. The third reason why users are not willing to use the feature is that they do not know what its function is and why it is advantageous to them; some users will see it as a normal favorite link. The fourth problem is that some users do not have an English browser, which can make it hard to find the bookmarks toolbar, and because this is the first step, users can be discouraged. The last problem is that not every user knows the differences between browsers; they may not know which one to choose, or even that they only have to follow the steps for their own browser.

P17. The website does not contain a general explanation about its purpose and functionality. The main problem that occurred was the insufficient amount of information. The homepage is not recognized as a homepage because there is no information about how the website works. People expect more explanation and some help function which they can use when something is unclear. It is not possible to find out how the website works by just clicking around. Because users do not understand the idea and the working of the website, other problems occur:

- What is Scopus and what can I do exactly on this web page?
- Who are the ‘others’ mentioned in the first wizard form?
- What is the connection between the theories on the ‘theory’ page and the website itself?
- What are the relationships between the theories?
- Can I enter my own theory or is it just a database/wiki?

These problems will result in users quickly abandoning the website.

P18. It is not clear what changes if you register and log in. It is not clear why a user should sign up.
This is because the idea of the website is not clear at the beginning, but also because the users are not told and do not see the difference.

P19. It is hard to understand the meaning of ‘because’ in the ‘cause-and-effect’ links.

P20. The symbols + and - are seen as positive and negative; the concept ‘variable’ is hard to understand. As mentioned in the problem about the titles, the short explanation about a variable does not stand out. The words ‘cause’ and ‘effect’ are also overlooked.

4.1.3 Problems in programming

P21. When enlarging a map from the relations graph, it can fall off the screen. The idea of using scroll bars is good, but the maps only appear when pointing the mouse at the relations graph. This makes it impossible to use the scroll bars.

P22. You lose the Scopus page after submitting the article. It can be useful to check, adjust or add information from Scopus after using the ‘post to Theorymaps’ button.

P23. The password has to be entered again after clicking on an external link. I do not know if this can be fixed by adjusting the code, but it is annoying if you enter a password and it is gone after visiting a provided external link.

P24. The update button in the wizard does not always work. There are situations in the wizard when the update button does not work properly. It can be fixed by going back to the start, but it is not just a one-time problem or delay.

P25. The website disappears during steps 2 and 3 of the wizard.

P26. After clicking on ‘add citation’ it is not possible to use the browser’s return button to go back to the page you were on before. Many users are so used to the browser’s ‘previous page’ button that they would rather use it than a button on the web page.

P27. Within the Safari explanation an extra space is needed: “Post to Theorymaps to your Bookmarks” Toolbar.
4.2 Severity assessment

A severity assessment classifies problems by their impact on the users’ ability to use the website and to accomplish the tasks they want to undertake. The classification I used has been used by Nielsen [1]. The five classes are:

0: A problem, but not a usability problem
1: Superficial problem: should be fixed if there is enough time
2: Minor problem: low priority to be fixed
3: Major problem: important to be fixed
4: A usability disaster: imperative to be fixed

The assignment of a problem to a class is based on three factors: problem impact, persistence and frequency.

Table 2. Problem severity

<table>
  <thead>
    <tr>
      <th>Problem</th>
      <th>Class</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>P1</td><td></td></tr>
    <tr><td>P2</td><td></td></tr>
    <tr><td>P3</td><td></td></tr>
    <tr><td>P4</td><td></td></tr>
    <tr><td>P5</td><td></td></tr>
    <tr><td>P6</td><td></td></tr>
    <tr><td>P7</td><td></td></tr>
    <tr><td>P8</td><td></td></tr>
    <tr><td>P9</td><td></td></tr>
    <tr><td>P10</td><td></td></tr>
    <tr><td>P11</td><td></td></tr>
    <tr><td>P12</td><td></td></tr>
    <tr><td>P13</td><td></td></tr>
    <tr><td>P14</td><td></td></tr>
    <tr><td>P15</td><td></td></tr>
    <tr><td>P16</td><td></td></tr>
    <tr><td>P17</td><td></td></tr>
    <tr><td>P18</td><td></td></tr>
    <tr><td>P19</td><td></td></tr>
    <tr><td>P20</td><td></td></tr>
    <tr><td>P21</td><td></td></tr>
    <tr><td>P22</td><td></td></tr>
    <tr><td>P23</td><td></td></tr>
    <tr><td>P24</td><td></td></tr>
    <tr><td>P25</td><td></td></tr>
    <tr><td>P26</td><td></td></tr>
    <tr><td>P27</td><td></td></tr>
  </tbody>
</table>

4.3 Heuristic classification

Heuristic evaluation is a usability evaluation technique for human-computer interaction, which means it mainly focuses on the interface of an application. The technique, developed by Nielsen [11, 22], consists of ten heuristics, each of which represents a set of website characteristics.
During an evaluation, a panel of expert users uses the heuristics as a checklist to test the interface of an application. The heuristics can be found in Table 3. In this research I did not test with experts but with users representing the actual audience, so I did not use this technique itself. However, I did use the ten heuristics for classification purposes. The heuristics cover the important characteristics of a website; therefore, every (interface) problem discovered during my tests could be related to one of the ten heuristics. This classification gives a good view of which kinds of characteristics usability improvement should focus on. For this reason I classified the problems found using Nielsen’s heuristics (Table 3). <table> <thead> <tr> <th>Heuristic Classification</th> <th>Number of problems</th> <th>Problems</th> </tr> </thead> <tbody> <tr> <td>1. Visibility of system status</td> <td>1</td> <td>P7</td> </tr> <tr> <td>2. Match between system and the real world</td> <td>0</td> <td></td> </tr> <tr> <td>3. User control and freedom</td> <td>3</td> <td>P8, P13, P22</td> </tr> <tr> <td>4. Consistency and standards</td> <td>3</td> <td>P1, P3, P5</td> </tr> <tr> <td>5. Error prevention</td> <td>0</td> <td></td> </tr> <tr> <td>6. Recognition rather than recall</td> <td>3</td> <td>P2, P4, P25</td> </tr> <tr> <td>7. Flexibility and efficiency of use</td> <td>2</td> <td>P6, P11</td> </tr> <tr> <td>8. Aesthetic and minimalist design</td> <td>1</td> <td>P16</td> </tr> <tr> <td>9. Help user recognize, diagnose and recover from errors</td> <td>1</td> <td>P9</td> </tr> <tr> <td>10. Help and documentation</td> <td>8</td> <td>P10, P12, P14, P15, P17, P18, P19, P20</td> </tr> </tbody> </table> Note: P21, P23, P24, P26 and P27 are programming problems which cannot be classified using these heuristics.

4.4 Analysis

Although the website seemed very clear and usable before testing, 6 of the 27 problems that occurred are usability disasters.
Only 3 problems will not hinder users in smoothly using the web application. Considering the heuristic classification, 8 of the 22 classifiable problems fall within heuristic 10: help and documentation. Combining the severity and the heuristic class of the discovered problems, 80% of the 10 problems with a severity code of 3 or 4 are concentrated on only three heuristics: 3, 6 and 10. User control, recognition and help are the most troubled areas. Help and documentation can be considered the most important class; of the 6 disaster problems, 3 are classified under heuristic 10. The direct usability problems were not the only outcome of the test sessions. Information about user behavior when browsing a website was also obtained. This could be important context when interpreting the view obtained from the discovered usability problems. The most interesting discovery was the way the test users browsed and used the web application. They explored most of the website by trial and error. When opening a web page, they immediately focused on the graphs, maps and tables. They clicked what was logical to them, and only if they got stuck did they try to read something. Even then they scanned a text rather than read it. This means that every interaction that is possible on the website has to be intuitive. For example, every user tried to adjust the map by clicking and dragging it directly. One user said that if a website consists only of text he reads it, but when the website contains graphs or other more attractive elements, that willingness disappears. This explains why the greater part of the problems is related to information about the working, context and purpose of the application. It seems an interactive environment makes a user even more impatient than a more informative website does. This impatience has to be compensated for by an application that is totally clear and intuitive.
Another remarkable finding is that although the users sometimes spent a lot of time figuring something out (three users said that in a real situation they would have decided to leave the website because it was too hard to understand), once I told them how it was done, they immediately understood, and the next time they needed the function they used it without any problems. If this is true in general, the outcome of the test should be interpreted differently. Even though most of the problems are related to the lack of information, and these are also the most severe problems, they are not durable problems. They are a great barrier when a user first uses an application, but once things are made clear, the overall understanding will result in flawless use of the web application. For example, a simple help function meant for first-time users could greatly improve a website.

4.5 User satisfaction

After the sessions, every test user was asked to mark the idea of working with theories in the way Theorymaps allows, and to mark the perceived usability. The marks presented in Table 4 are on a scale of 1–7, with seven as the highest score.

<table>
  <thead>
    <tr>
      <th colspan="3">Table 4. User satisfaction</th>
    </tr>
    <tr>
      <th>Name</th>
      <th>Idea</th>
      <th>Usability</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Test user 1</td><td>5</td><td>4</td></tr>
    <tr><td>Test user 2</td><td>7</td><td>5</td></tr>
    <tr><td>Test user 3</td><td>6.5</td><td>5</td></tr>
    <tr><td>Test user 4</td><td>4</td><td>6</td></tr>
    <tr><td>Test user 5</td><td>6</td><td>4</td></tr>
  </tbody>
</table>

The results presented in Table 4 are remarkable because most of the test users were asked during the test whether they would have given up at that particular moment because they got stuck or encountered too much difficulty, and three of them said ‘yes’. Furthermore, in some tests the user did not understand the purpose of the website and would therefore have stopped if it had not been a test. Still, the marks given are good to very good.
This could be further evidence that a user has to have enough information to make the website usable and therefore satisfactory. If a user says halfway through browsing that he would quit if it were not a test, and twenty minutes later gives a good mark not only for the idea but also for the usability, it is clear how easily the user's judgment is influenced, and how important it is that the user knows how the website can be useful and how it works. 5. DISCUSSION This research has resulted in a clear set of usability issues of a Web 2.0 application with complex data structures. The issues give an initial insight into what web application designers should focus on and how users react to this kind of application. Nevertheless, this research is mainly a case study, and therefore more research to support the findings has to be done. The results from Theorymaps showed a high concentration of problems related to information about the purpose of the website and about its functionality. It seems reasonable to believe that this issue will be experienced in more Web 2.0 websites with complex data structures, because the element of interactivity that is shared by all these applications seems to decrease the user's willingness to read and use explanatory documentation. This implies that an important aspect of usability in Web 2.0 with complex data is implementing a successful way of communicating important information to users. Further research could verify these findings by testing more applications in the field of Web 2.0 and complex data structures. Hypotheses that could be used are: **H1:** Users use trial and error instead of reading explanatory documentation. **H2:** Users who do not know the purpose of the website from the home page will encounter more difficulties than users who do. **H3:** Users who understand the concept of a function will encounter no problems when using the same function, or a function with the same concept, again. 
**H4:** Users are willing to take an explanatory website tour the first time they visit the website (already pointing toward a possible solution). If these hypotheses were researched, designers would have more supporting evidence and information about the problem of communicating necessary information to users. Knowing more precisely where the difficulties lie would be the basis for improved usability. 6. CONCLUSIONS As discussed in Chapter 1, one of the most important characteristics of Web 2.0 found in the literature is ease of use; a website has to be straightforward. In this research I observed that the most severe problems occurred because of a lack of sufficient information about the purpose and working of the web application. The research not only illustrated Web 2.0 usability problems; it also made clear that in an interactive environment users will mainly use 'trial and error' to discover how an application works, instead of searching for and reading the text that explains it. This could be an important reason why Web 2.0 applications should be intuitive and straightforward. Consequently, making decisions about what information stands out, and the way it stands out, becomes even more important than it already was. Further research about user behavior in Web 2.0 applications with complex data structures has to be done to verify and define the problems of these applications more precisely. 7. ACKNOWLEDGMENTS I would like to thank the students and university staff who participated in the usability tests for their time and dedication. I also want to thank Roland Müller for his guidance and support throughout the project. 8. REFERENCES Appendix A: Script test sessions This script is a test script provided by Steven Krug [15]. The first part is about comforting the test user and getting some information about his surfing experience. 
After the questions, the user is asked to do five main (problematic) tasks, and during each task he will be asked more specific (simple) questions. The text below is only a script and was continuously adjusted during the sessions in order to retrieve as much information as possible. Hi, _______. My name is Wouter Kool, and I am going to be walking you through this session. You probably already know, but let me explain why we have asked you to come here today: We are testing a web site that is already in use, but we want to know how to improve it. I want to make it clear right away that we are testing the site, not you. You cannot do anything wrong here. In fact, this is probably the one place today where you do not have to worry about making mistakes. We want to hear exactly what you think, so please do not worry that you are going to hurt our feelings. We want to improve it, so we need to know honestly what you think. As we go along, I am going to ask you to think out loud, to tell me what is going through your mind. This will help us. If you have questions, just ask. I may not be able to answer them right away, since we are interested in how people do when they do not have someone sitting next to them, but I will try to answer any questions you still have when we are done. You may have noticed the video camera. With your permission, we are going to record what happens on the computer screen and what you have to say. The recording will be used only to help us figure out how to improve the site, and it will not be seen by anyone except me and my supervisor. It also helps me, because I do not have to take many notes. Do you have any questions before we begin? Before we look at the site, I would like to ask you just a few quick questions. First, what do you study? Good. Now, roughly how many hours a week would you say you spend using the Internet? How do you spend that time? In a typical day, for instance, tell me what you do, at work and at home. 
Do you have any favorite Web sites? OK, great. We are done with the questions, and we can start looking at things. First, I am just going to ask you to look at this page and tell me what you think it is, what strikes you about it, and what you think you would click on first. And again, as much as possible, it will help us if you can try to think out loud so we know what you are thinking about. From this point on, the text below shows the main questions/subjects. Every question will involve a conversation and sub-questions in reaction to what happens during the test. The sub-questions cannot be determined in advance. - Now you know what the website is about. Can you try to sign up? - Do you understand what you can do on this page? (Scopus tool) - Could you do what it says and add a source via Scopus? - Could you try to add the text I sent you in the email as a source? - Could you try to make your own Theory Map about the text I sent you in the email? - Could you try to find the page where you can predict the behavior of a variable in your theory when adjusting another one? - Could you try to find the theories about Health? - Could you find the correlation between 'Smoking' and 'Cancer' in these theories? During the testing of the main questions, more specific questions (simple tasks) will be asked, for example: - Can you explain what you see? - Do you like this/that? - Do you understand what you can do? - What kind of actions would be the first you would do? - Why do(n't) you click that button? - Do you know what that button does? I have one last question for you: if you had to rate your satisfaction concerning this website on a scale of 1, most unsatisfactory, to 7, most satisfactory, what number would it be?
Optimal and fast throughput evaluation of CSDF

DOI: 10.1145/2897937.2898056. Peer-reviewed version, published in DAC '16: The 53rd Annual Design Automation Conference, 2016.

Bruno Bodin, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom, bbodin@inf.ed.ac.uk
Alix Munier-Kordon, Sorbonne Universités, UPMC, UMR 7606, LIP6, Paris, France, alix.munier@lip6.fr
Benoît Dupont de Dinechin, KALRAY SA, 445 rue Lavoisier, Montbonnot, France, benoit.dinechin@kalray.eu

ABSTRACT The Synchronous Dataflow Graph (SDFG) and the Cyclo-Static Dataflow Graph (CSDFG) are two well-known models, used in practice by industry for many years, and for which a large number of analysis techniques exist. Yet basic problems such as throughput computation or liveness evaluation are not well solved, and their complexity is still unknown. In this paper, we propose K-Iter, an iterative algorithm based on K-periodic scheduling to compute the throughput of a CSDFG. By using this technique, we are able to compute in less than a minute the throughput of industrial applications for which no result was available before. 
Keywords Cyclo-Static Dataflow Graph, Static analysis, Throughput 1. INTRODUCTION The execution of streaming applications such as multimedia streaming or signal processing by an embedded system must respect strict constraints of throughput, latency, memory usage, and power consumption. Several dataflow programming languages have been developed to express this class of applications in order to handle such requirements. Dataflow modeling involves designing an application as a set of tasks which communicate only through channels. The Synchronous Dataflow Graph (SDFG) and, more generally, the Cyclo-Static Dataflow Graph (CSDFG) are two models to statically specify an application's behavior. They are commonly used to evaluate applications in terms of throughput or memory consumption using dataflow static analysis techniques. The exact determination of the throughput requires computing an optimal schedule. Most authors have considered as-soon-as-possible schedules [14, 8]. Their computation has an exponential complexity with respect to the dataflow size in the worst case, and is usually too long for real-life applications. Approximate methods were also developed to obtain polynomial-time evaluations. They usually limit their space of solutions to particular categories of schedules, such as the periodic ones [1] (or equivalently, strictly periodic ones). A periodic schedule is a cyclic schedule whose definition consists only of a first execution time per task and a period of execution per task. Polynomial methods have been proposed to compute periodic schedules of SDFG [1] and CSDFG [4], and these methods can be applied to throughput estimation or buffer sizing. Meanwhile, existing static dataflow languages are revealing new cases which are not well supported by these techniques [4]. These cases are too complex to be handled with exact schedules: as an example, there is no optimal throughput result for the H264 application proposed by [4]. 
Furthermore, periodic solutions remain an over-approximation which might be insufficient for embedded systems. K-periodic scheduling [3] was developed as an alternative method for when both as-soon-as-possible and periodic methods are unsatisfactory. A K-periodic schedule is built by periodically repeating a schedule of the $K_t$ first executions of each task. The estimated value of the throughput directly depends on the periodicity vector $K$. A periodic schedule can be seen as the particular case of a K-periodic schedule for which $K_t = 1$ for every task. Setting $K$ equal to what is called the repetition vector provides the optimal value of the throughput, but the space and time complexities are not scalable. An exponential number of pertinent values of $K$ may be considered between these two extreme solutions to exactly evaluate the throughput. This paper aims to compute the throughput optimally by iteratively increasing a periodicity vector $K$ until an optimal solution is reached. From a theoretical point of view, the computation of a K-periodic schedule of minimum period is presented, followed by an original optimality test for a periodicity vector. Our algorithm is successfully compared with classical approaches for both SDFG and CSDFG, and solves several classical benchmarks. The contributions of this paper are: - an extension of K-periodic scheduling to CSDFG, - an optimality test of a K-periodic schedule, - and K-Iter, a heuristic to efficiently explore the possible K-periodic schedules of a CSDFG. Section 2 introduces the model, notations and some definitions on K-periodic schedules. Our algorithm is presented in Section 3. Section 4 is devoted to the experiments. Related works are presented in Section 5. Section 6 concludes. 2. CYCLO-STATIC DATAFLOW GRAPHS This section is devoted to the presentation of the model and notations of Cyclo-Static Dataflow Graphs, followed by the definition of the throughput. K-periodic schedules are introduced last. 
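The K-periodic repetition rule described above — fix the first \( K_t \) start times of a task and repeat them with a fixed period — can be sketched as follows. This is a minimal illustration with hypothetical values, not the paper's implementation:

```python
def kperiodic_start(first_starts, period, n):
    """Start time of the n-th execution (n >= 1) of a task under a
    K-periodic schedule: the first K_t starts are given explicitly,
    and every later execution repeats them shifted by the period."""
    K = len(first_starts)
    # Decompose n = alpha * K + (beta + 1), with beta 0-indexed here.
    alpha, beta = divmod(n - 1, K)
    return first_starts[beta] + alpha * period

# A 2-periodic task: first two starts at times 0 and 5, period 12.
starts = [kperiodic_start([0, 5], 12, n) for n in range(1, 6)]
print(starts)  # -> [0, 5, 12, 17, 24]
```

The schedule description stays constant-size (the \( K_t \) initial starts plus one period) no matter how many executions are unrolled, which is exactly what makes K-periodic schedules cheaper to manipulate than fully unrolled as-soon-as-possible schedules.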
2.1 Model definition A Cyclo-Static Dataflow Graph (CSDFG) is a directed graph in which nodes model tasks and arcs correspond to buffers. It is denoted by \( G = (T, B) \) where \( T \) (resp. \( B \)) is the set of nodes (resp. arcs). Every task \( t \in T \) is decomposed into \( \varphi(t) \) phases; for every value \( p \in \{1, \ldots, \varphi(t)\} \), the \( p \)th phase of \( t \) is denoted by \( t_p \) and has a constant duration \( d(t_p) \in \mathbb{N} \). One iteration of the task \( t \in T \) corresponds to the ordered execution of the phases \( t_1, \ldots, t_{\varphi(t)} \). Furthermore, every task \( t \in T \) is executed over several iterations: for every integer \( n \) and every phase \( p \), \( (t_p, n) \) denotes the \( n \)th execution of the \( p \)th phase of \( t \). Every arc \( b = (t, t') \in B \) represents a buffer of unbounded size from the task \( t \) to \( t' \) with an initial number of stored data \( M_0(b) \in \mathbb{N} \). For every \( p \in \{1, \ldots, \varphi(t)\} \), \( in_b(p) \) data are written in \( b \) at the end of an execution of \( t_p \). Similarly, for every \( p' \in \{1, \ldots, \varphi(t')\} \), \( out_b(p') \) data are read from \( b \) before the execution of \( t'_{p'} \). In addition, we set \[ i_b = \sum_{p=1}^{\varphi(t)} in_b(p) \quad \text{and} \quad o_b = \sum_{p'=1}^{\varphi(t')} out_b(p'). \] A Synchronous Dataflow Graph (SDFG) can be seen as a special case of CSDFG in which each task has only one phase: \( \forall t \in T, \varphi(t) = 1 \). Figure 1 shows a buffer \( b \) between two tasks \( t \) and \( t' \). The respective numbers of phases of the two tasks are \( \varphi(t) = 3 \) and \( \varphi(t') = 2 \). The two associated vectors of \( b \) are \( in_b = [2, 3, 1] \) and \( out_b = [2, 5] \); thus \( i_b = 6 \) and \( o_b = 7 \). The initial number of data is \( M_0(b) = 0 \). 
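As a minimal sketch of this buffer bookkeeping, using the illustrative rates of the Figure 1 example (the variable names are ours, not from the paper):

```python
from fractions import Fraction

# Phase-wise production/consumption rates of the buffer b = (t, t'),
# taken from the Figure 1 example in the text.
in_rates = [2, 3, 1]    # data written by phases t_1..t_3 of t
out_rates = [2, 5]      # data read by phases t'_1..t'_2 of t'

i_b = sum(in_rates)     # total produced per iteration of t  -> 6
o_b = sum(out_rates)    # total consumed per iteration of t' -> 7

# The smallest execution counts (q_t, q_t') that balance this single
# buffer must satisfy q_t * i_b == q_t' * o_b; exact rational
# arithmetic gives them without rounding issues.
ratio = Fraction(o_b, i_b)                  # q_t / q_t' = 7/6
q_t, q_tp = ratio.numerator, ratio.denominator
print(i_b, o_b, q_t, q_tp)                  # -> 6 7 7 6
assert q_t * i_b == q_tp * o_b              # 7*6 == 6*7
```

Executing \( t \) seven times and \( t' \) six times thus returns the buffer to its initial fill level, which is the data-rate balance that the repetition vector generalizes to whole graphs.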
Figure 1: A simple buffer between two tasks \( t \) and \( t' \). 2.2 Schedules and consistency A feasible (or valid) schedule associated with a CSDFG is a function \( S \) that associates with every triple \( (t, p, n) \), with \( t \in T \), \( p \in \{1, \ldots, \varphi(t)\} \) and \( n \in \mathbb{N} \), a starting time \( S((t_p, n)) \in \mathbb{R} \) of \( (t_p, n) \), such that the number of data in every buffer \( b \in B \) remains non-negative, i.e. no data are read before they are produced. Consistency is a necessary (but not sufficient) condition for the existence of a valid schedule within bounded memory; it was first established for SDFG [10]. It has been extended to CSDFG [2] by considering the cumulative number of data produced/consumed by one iteration of its tasks. A CSDFG is consistent if there exists a repetition vector \( q \in (\mathbb{N} - \{0\})^{\mid T \mid} \) such that \[ \forall b = (t, t') \in B, \quad q_t \times i_b = q_{t'} \times o_b. \] The repetition vector defines the number of task executions in a sequence that preserves data quantities in each buffer. 2.3 Throughput of a CSDFG The throughput of a task \( t \in T \) associated with a schedule \( S \) is usually defined as \[ Th^S_t = \lim_{n \to \infty} \frac{n}{S((t_1, n))}. \] Theorem 1, proved by Stuijk et al. [16], characterizes the relations between the throughputs of different tasks using the repetition vector. \[\text{Theorem 1} \quad ([16]). \quad \text{Let } G = (T, B) \text{ be a consistent CSDFG and } S \text{ a valid schedule. 
For any pair of tasks } (t, t'), \quad \frac{Th^S_{t'}}{q_{t'}} = \frac{Th^S_t}{q_t}, \quad \text{where } q \text{ is the repetition vector of } G.\] The throughput of a valid schedule \( S \) is then equal to \( Th^S_G = \frac{Th^S_t}{q_t} \) for any task \( t \in T \), and the period is \( \Omega^S = \frac{1}{Th^S_G} \). Let us consider a consistent CSDFG \( G = (T, B) \) initially marked by \( M_0(b), b \in B \). The problem addressed by the present paper is to evaluate the maximum reachable throughput of \( G \), denoted by \( Th^*_G \). The most common scheduling policy consists of executing the tasks as soon as possible. Figure 3 presents the first executions of the as-soon-as-possible schedule for the CSDFG pictured in Figure 2. The as-soon-as-possible schedule maximizes the throughput. Nevertheless, its description can be of exponential size, as it depends on the repetition vector rather than on the problem size. Other scheduling policies must therefore be considered to reduce the computation time of the maximum throughput. For instance, the repetition vector of the CSDFG presented in Figure 2 is \( q = [6, 12, 6, 1] \); the graph is thus consistent. As consistency is a necessary condition for the existence of a valid schedule, our study focuses on consistent CSDFG.

Figure 2: A consistent CSDFG with \( d(A) = [1, 1] \), \( d(B) = [1, 1, 1] \), \( d(C) = [1] \) and \( d(D) = [1] \).

Figure 3: An as-soon-as-possible feasible schedule for the CSDFG pictured in Figure 2.

Figure 4: A K-periodic schedule for the CSDFG pictured in Figure 2 with periodicity factor \( K = [2, 1, 1, 1] \). Highlighted executions correspond to initial starting times; the others are derived from the period \( \mu^S \).

2.4 K-periodic scheduling Let \( K = [K_1, \ldots, K_{\mid T \mid}] \in (\mathbb{N} - \{0\})^{\mid T \mid} \). A schedule \( S \) is \( K \)-periodic with a fixed vector \( K \) if, for any task \( t \in T \), 
and for any integers \( p \in \{1, \cdots, \varphi(t)\} \) and \( \beta \in \{1, \cdots, K_t\} \), the period \( \mu^S_t \) and the values \( S((t_p, \beta)) \) are fixed. Then, for any integer \( n \in \mathbb{N} \) such that \( n = \alpha \times K_t + \beta \) with \( \alpha \in \mathbb{N} \) and \( \beta \in \{1, \cdots, K_t\} \), we get \[ \forall p \in \{1, \cdots, \varphi(t)\}, \quad S((t_p, n)) = S((t_p, \beta)) + \alpha \mu^S_t. \] If the periodicity vector \( K \) is unitary (i.e. \( K_t = 1, \forall t \in T \)), the schedule is said to be periodic, or 1-periodic. The throughput of any task \( t \in T \) for a valid K-periodic schedule \( S \) verifies \( Th^S_t = \frac{K_t}{\mu^S_t} \). The throughput of \( S \) is then \[ Th^S_G = \frac{K_t}{q_t \times \mu^S_t}. \] Equivalently, the period of \( S \) is \[ \Omega^S = \frac{1}{Th^S_G} = \frac{q_t \times \mu^S_t}{K_t} \quad \forall t \in T. \] As an example, the periodicity factor of task \( A \) in Figure 4 equals 2, thus the \( K_A \times \varphi(A) = 4 \) first executions of \( A \) are fixed; the starting times of all successive ones are implicitly defined using the period \( \mu^S_A = 12 \). The period of \( S \) is thus \( \Omega^S = \frac{q_A \times \mu^S_A}{K_A} = \frac{6 \times 12}{2} = 36 \). It is important to note that for a 1-periodic schedule, the minimum reachable period of \( G \) is \( \Omega^S = 108 \). All these notations will be used in Section 3 to present our contributions. 3. THROUGHPUT EVALUATION METHOD This section presents our algorithm. Subsection 3.1 recalls the characterization of periodic schedules of a CSDFG, which is extended to K-periodic schedules in Subsection 3.2 using a simple transformation. The next subsection treats the computation of the minimum period of a K-periodic schedule. Subsection 3.4 is devoted to an original optimality test of a fixed \( K \). 
Subsection 3.5 presents our algorithm K-Iter, which heuristically explores the possible K-periodic schedules of a CSDFG in order to provide its maximal throughput optimally. 3.1 Periodic scheduling of a CSDFG The following Theorem 2 characterizes a feasible periodic schedule as a set of linear constraints. This set of constraints composes a linear program which yields the maximal throughput of a periodic schedule. In order to state Theorem 2, several definitions are required. First, the total number of data produced by \( t \) in the buffer \( b \) at the completion of \( (t_p, n) \) is defined as \[ I_b((t_p, n)) = \sum_{\alpha=1}^{p} in_b(\alpha) + (n - 1) \times i_b. \] Similarly, the number of data consumed by \( t' \) from the buffer \( b \) at the completion of \( (t'_{p'}, n') \) is defined by \( O_b((t'_{p'}, n')) = \sum_{\alpha=1}^{p'} out_b(\alpha) + (n' - 1) \times o_b \). The total number of data contained in a buffer must remain non-negative. That is, any execution \( (t'_{p'}, n') \) can be done at the completion of \( (t_p, n) \) if and only if \( M_0(b) + I_b((t_p, n)) - O_b((t'_{p'}, n')) \geq 0 \). For example, considering the CSDFG pictured in Figure 1, the execution \( (t'_2, 1) \) can be done at the completion of \( (t_1, 2) \) since \( M_0(b) + I_b((t_1, 2)) - O_b((t'_2, 1)) = 0 + 8 - 7 \geq 0 \). For any pair of values \((\alpha, \gamma) \in \mathbb{Z} \times (\mathbb{N} - \{0\})\), we set \[ \lfloor \alpha \rfloor^{\gamma} = \left\lfloor \frac{\alpha}{\gamma} \right\rfloor \times \gamma \quad \text{and} \quad \lceil \alpha \rceil^{\gamma} = \left\lceil \frac{\alpha}{\gamma} \right\rceil \times \gamma. \] Let us consider a buffer \( b = (t, t') \in B \). For any pair \((p, p') \in \{1, \cdots, \varphi(t)\} \times \{1, \cdots, \varphi(t')\} \), let us define \[ Q^G_b(p, p') = O_b((t'_{p'}, 1)) - I_b((t_p, 1)) - M_0(b) + in_b(p). 
\] We also note \( \text{gcd}_b = \gcd(i_b, o_b) \), \[ \alpha^G_b(p, p') = \left\lceil Q^G_b(p, p') - \min(in_b(p), out_b(p')) \right\rceil^{\text{gcd}_b} \] and \[ \beta^G_b(p, p') = \left\lfloor Q^G_b(p, p') - 1 \right\rfloor^{\text{gcd}_b}. \] We now recall Theorem 2, which characterizes any feasible periodic schedule. **Theorem 2** ([13]). Let \( G \) be a consistent CSDFG. A periodic schedule \( S \) of period \( \Omega^S \) is feasible if and only if, for any buffer \( b = (t, t') \in B \) and for every pair \((p, p') \in \{1, \cdots, \varphi(t)\} \times \{1, \cdots, \varphi(t')\} \) with \( \alpha^G_b(p, p') \leq \beta^G_b(p, p') \), \[ S((t'_{p'}, 1)) - S((t_p, 1)) \geq d(t_p) + \Omega^S \times \frac{\beta^G_b(p, p')}{q_t \times i_b}. \] 3.2 Extension to K-periodic scheduling The extension of Theorem 2 to K-periodic schedules with a fixed periodicity vector \( K \) comes from a transformation of the initial CSDFG \( G = (T, B) \) into an equivalent one \( \tilde{G} = (T, \tilde{B}) \) of the same structure, in which the phase vectors of any task \( t \) are duplicated \( K_t \) times. For any vector \( v \) of size \( s \) and any integer \( P > 0 \), \( [v]^P \) denotes the vector of size \( s \times P \) obtained by duplicating \( v \) exactly \( P \) times, i.e. \( \forall k \in \{1, \cdots, s\} \), \[ [v]^P(k) = [v]^P(k + s) = \cdots = [v]^P(k + (P - 1) \times s) = v(k). \] For any task \( t \in T \), we set \( \tilde{\varphi}(t) = K_t \times \varphi(t) \) and \( \tilde{d}(t) = [d(t)]^{K_t} \). For any buffer \( b = (t, t') \in B \), we set \( \tilde{in}_b = [in_b]^{K_t} \), \( \tilde{out}_b = [out_b]^{K_{t'}} \), and \( \tilde{M}_0(b) = M_0(b) \). A consequence of this transformation is \( \tilde{i}_b = K_t \times i_b \) and \( \tilde{o}_b = K_{t'} \times o_b \). \( \tilde{G} \) is a consistent graph. 
Indeed, by definition of \( q \), for any buffer \( a = (t, t') \in B \), \( q_t \times i_a = q_{t'} \times o_a \), and thus
\[ \frac{q_t \times \text{lcm}(K)}{K_t} \times \tilde{i}_a = \frac{q_{t'} \times \text{lcm}(K)}{K_{t'}} \times \tilde{o}_a, \]
where \( \text{lcm}(K) \) is the least common multiple of the values \( K_t \), \( t \in T \). Thus \( \tilde{q}_t = \frac{q_t \times \text{lcm}(K)}{K_t} \) for \( t \in T \) is a repetition vector of \( \tilde{G} \).

Let us set, for any buffer \( a = (t, t') \in \tilde{B} \),
\[ \mathcal{V}(a) = \{(p, p') \in \{1, \cdots, \tilde{\varphi}(t)\} \times \{1, \cdots, \tilde{\varphi}(t')\} : \alpha^{\tilde{G}}_a(p, p') \leq \beta^{\tilde{G}}_a(p, p')\}. \]
Following Theorem 2, the determination of the minimum period \( \Omega^{\tilde{S}} \) of a periodic schedule can be modeled with the following linear program:
\[
\begin{align*}
\text{Minimize } & \Omega^{\tilde{S}} \\
\text{subject to: } & \forall a = (t, t') \in \tilde{B},\ \forall (p, p') \in \mathcal{V}(a), \\
& \tilde{S}((t'_{p'}, 1)) - \tilde{S}((t_p, 1)) \geq \tilde{d}(t_p) + \Omega^{\tilde{S}} \times \frac{\beta^{\tilde{G}}_a(p, p')}{\tilde{i}_a \times \tilde{q}_t} \\
& \forall t \in T,\ \forall p \in \{1, \cdots, \tilde{\varphi}(t)\},\ \tilde{S}((t_p, 1)) \in \mathbb{R}^+ \\
& \Omega^{\tilde{S}} \in \mathbb{R}^+ - \{0\}
\end{align*}
\]
The next theorem highlights the relationship between the periods of \( \tilde{G} \) and \( G \):

**Theorem 3.** Let \( \tilde{S} \) be a 1-periodic feasible schedule of \( \tilde{G} \) of period \( \Omega^{\tilde{S}} \). The starting times of \( \tilde{S} \) define a K-periodic feasible schedule \( S \) of \( G \) with normalized period \( \Omega^S = \Omega^{\tilde{S}} / \text{lcm}(K) \).

**Proof.** By construction of \( \tilde{G} \), any periodic feasible schedule \( \tilde{S} \) of \( \tilde{G} \) defines a K-periodic feasible schedule \( S \) of \( G \).
Then, for any task \( t \in T \), \( \mu^{\tilde{S}}_t = K_t \times \mu^S_t \), thus
\[ \Omega^{\tilde{S}} = \tilde{q}_t \times \mu^{\tilde{S}}_t = \frac{q_t \times \text{lcm}(K)}{K_t} \times K_t \times \mu^S_t = q_t \times \text{lcm}(K) \times \mu^S_t = \Omega^S \times \text{lcm}(K). \]
\(\square\)

### 3.3 Resolution of the linear program

The linear program for the determination of a minimum period can be transformed into a Maximum Cost-to-time Ratio Problem (MCRP for short), which is a polynomially solvable problem [5]. Considering a bi-valued directed graph \( H = (N, E) \) where any arc \( e \in E \) is bi-valued by \( L(e) \) and \( H(e) \), the Cost-to-time Ratio of any circuit \( c = (e_1, e_2, \cdots, e_p) \) is defined as
\[ R(c) = \frac{\sum_{i=1}^p L(e_i)}{\sum_{i=1}^p H(e_i)}. \]
Let \( \mathcal{C}(H) \) be the set of elementary circuits of \( H \). The maximum Cost-to-time Ratio of the graph \( H \) is then
\[ \lambda_H = \max_{c \in \mathcal{C}(H)} R(c). \]
An elementary circuit \( c \in \mathcal{C}(H) \) is critical if \( R(c) = \lambda_H \). The bi-valued directed graph \( H = (N, E) \) associated with our linear program is defined as follows:

- \( N = \{(t_p, 1),\ t \in T,\ p \in \{1, \cdots, \tilde{\varphi}(t)\}\} \) is the set of nodes;
- \( E = \{((t_p, 1), (t'_{p'}, 1)),\ a = (t, t') \in \tilde{B},\ (p, p') \in \mathcal{V}(a)\} \) is the set of arcs; any arc \( e = ((t_p, 1), (t'_{p'}, 1)) \in E \) is bi-valued by
\[ (L(e), H(e)) = \left( \tilde{d}(t_p),\ -\frac{\beta^{\tilde{G}}_a(p, p')}{\tilde{i}_a \times \tilde{q}_t} \right). \]

The determination of the minimum period \( \Omega^{\tilde{S}} \) is then equivalent to the computation of the maximum Cost-to-time Ratio, i.e. \( \Omega^{\tilde{S}} = \lambda_H \). Figure 5 presents the bi-valued graph \( H \) that corresponds to the CSDFG pictured in Figure 2 with \( K = \{1, 1, 1, 1\} \). The maximum Cost-to-time Ratio equals 108 and is reached by the circuit \( c = \{A_1, D_1, C_1\} \) with \( H(c) = \frac{1}{36} \) and \( L(c) = 3 \). This implies that the minimum period of a feasible periodic schedule for the CSDFG is \( \Omega^{\tilde{S}} = 108 \).
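As an illustrative sketch (our own toy code, not the paper's implementation; the names `Arc`, `ratio` and `maxRatio` are ours), the ratio \( R(c) \) of a circuit and the maximum ratio over a set of circuits follow directly from the definition above:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Each arc of the bi-valued graph carries a cost L(e) and a time H(e).
struct Arc { double L; double H; };

// Cost-to-time ratio R(c) of a circuit c: sum of costs over sum of times.
double ratio(const std::vector<Arc>& c) {
    double L = 0.0, H = 0.0;
    for (const Arc& e : c) { L += e.L; H += e.H; }
    return L / H;
}

// Maximum Cost-to-time Ratio lambda_H over a set of elementary circuits.
double maxRatio(const std::vector<std::vector<Arc>>& circuits) {
    double best = -1e300;
    for (const auto& c : circuits) best = std::max(best, ratio(c));
    return best;
}
```

On a circuit with \( L(c) = 3 \) and \( H(c) = 1/36 \), `ratio` returns 108, matching the minimum period found on the example of Figure 5; the circuit achieving the maximum is the critical circuit used by the optimality test of the next subsection.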
### 3.4 K-periodic schedule optimality test

A method based on the MCRP returns critical circuits, i.e. circuits \( c \) for which the value \( R(c) \) is maximum. We take advantage of them to certify the optimality of a K-periodic schedule. The next theorem allows us to test whether the maximum throughput associated with a periodicity vector \( K \) is the maximal reachable throughput of the graph \( G \).

**Theorem 4 (Optimality test).** Let \( G = (T, B) \) be a consistent CSDFG, \( K \) a periodicity vector and \( H = (N, E) \) the associated bi-valued graph. Let us suppose that \( c \in \mathcal{C}(H) \) is a critical circuit such that, for every execution \( (t_p, 1) \) of \( c \), \( K_t \) is a multiple of \( \bar{q}_t = \frac{q_t}{\gcd\{q_{t'},\ t' \in c\}} \). Then, the maximum reachable throughput of \( G \) equals \( \frac{\text{lcm}(K)}{R(c)} \).

**Proof.** By using Theorem 3, the minimum period of the CSDFG \( G \) associated with \( H \) and \( K \) verifies \( \Omega^S = R(c) / \text{lcm}(K) \). Let \( C \) be the sub-graph of \( G \) composed of the tasks of the circuit \( c \). The minimum period of \( C \) is obtained for a K-periodic schedule with \( K \) following the assumption of the theorem. \(\square\)

For instance, let us consider the bi-valued graph from Figure 5. For the critical circuit \( \{A, D, C\} \), we observe \( \bar{q}_B = 2 \) and \( K_B = 1 \); since \( K_B \) is not a multiple of \( \bar{q}_B \), the optimality test is not satisfied.

### 3.5 The K-Iter algorithm

Algorithm 1 iteratively computes a sequence of critical circuits, increasing the periodicity factors until the optimality test of Theorem 4 is fulfilled. The update of the periodicity factors ensures that a circuit \( c \) will pass the optimality test if it remains critical at the next step. The values of the periodicity factors necessarily increase each time the loop test fails.
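The divisibility requirement of Theorem 4 suggests a natural periodicity update, sketched below. This is our own illustrative reading, not the paper's Algorithm 1: `qbar` computes \( \bar{q}_t \) from the repetition factors of the tasks of the critical circuit, and `updateK` enlarges \( K_t \) so that the circuit would pass the test if it stayed critical.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// qbar_t = q_t / gcd{q_t', t' in c}, computed from the repetition factors
// of the tasks found on the critical circuit c.
long qbar(long q_t, const std::vector<long>& circuitQ) {
    long g = 0;
    for (long q : circuitQ) g = std::gcd(g, q);  // gcd(0, x) == x
    return q_t / g;
}

// Enlarge K_t to the smallest value that is a multiple of both the old K_t
// and qbar_t, satisfying the hypothesis of Theorem 4 for this circuit.
long updateK(long K_t, long qbar_t) { return std::lcm(K_t, qbar_t); }
```

For example, on a circuit whose tasks have repetition factors \( \{4, 6, 2\} \), a task with \( q_t = 4 \) gets \( \bar{q}_t = 2 \), and a current factor \( K_t = 1 \) is raised to 2.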
The convergence of the algorithm is guaranteed since the number of elementary circuits of a graph is bounded and each circuit \( c \) is modified at most once; indeed, any modified circuit tested subsequently in the algorithm will fulfill the optimality test.

Figure 5: A bi-valued graph \( H \) that corresponds to the CSDFG pictured in Figure 2 when the periodicity vector is \( \{1,1,1,1\} \). The maximum Cost-to-time Ratio is reached by the circuit \( c = \{A_1, D_1, C_1\} \) and is equal to \( \Omega^{\tilde{S}} = 108 \).

4. EXPERIMENTAL RESULTS

The K-Iter algorithm is implemented as a C++ application and is available online\(^1\). We compared K-Iter with the state-of-the-art throughput evaluation methods for SDFG [7, 6] and CSDFG [16, 4]. The SDF\(^3\) benchmarks [8] are considered for SDFG. These experiments are summarized in Table 1, which is composed of four categories of graphs, including an actual DSP category. For CSDFG, we considered the benchmark set of [4]; our results are presented in Table 2, which is also composed of actual and synthetic applications. All these experiments were performed on an Intel i5-4570 computer with 16GB RAM.

### 4.1 Evaluation of SDFG

We compare K-Iter with two optimal SDFG techniques. First, the symbolic execution based method [8], which consists of executing an application until it reaches a previously known state; this reveals a cyclic execution pattern, from which the application throughput can be computed. Second, we consider the cycle-induced sub-graph method [6], which consists of producing a dependency graph, similar to expansion techniques [10], and solving its maximal cycle ratio problem. These experiments are summarized in Table 1. We observe that for the two categories MimicDSP and LgTransient, the overall performance of K-Iter is between one and two orders of magnitude better than [6] and [8]. For the LgHSDF category, the performance of [6] and K-Iter are similar, while [8] is two orders of magnitude slower.
For the ActualDSP category, K-Iter is slower on average. Looking at the details of these experiments, K-Iter is only slower for one particular graph. For this graph (namely, the H263 decoder) the K-Iter computation time is 148 ms, whereas [6] takes 4 ms and [8] takes 36 ms. This is the longest computation time observed for K-Iter in the whole SDF\(^3\) benchmark. In comparison, the longest duration was 3 sec for [6] and 22 sec for [7].

### 4.2 Evaluation of CSDFG

For the CSDFG evaluation, we compared the K-Iter algorithm with two existing techniques: an approximate method [4] based on periodic scheduling, and an exact technique based on symbolic execution [16]. The symbolic execution technique we used was the publicly available implementation of SDF\(^3\) [15], including a correction of the repetition vector computation method to avoid integer overflow. For this reason, our results differ from [4]. These experiments are summarized in Table 2. The industry cases are considered with and without buffer size constraints. In both cases K-Iter performs well. The K-Iter algorithm is several orders of magnitude faster than the symbolic execution. Furthermore, K-Iter provides optimal results for applications for which this had not been done before (namely, the JPEG2000 and the H264 with buffer size constraints). Indeed, even if [4] provided an optimal solution for one of them, because it was an approximate method there was no way to prove optimality until now. For the synthetic graphs, while the periodic method always provides a solution, these solutions are not necessarily optimal. In contrast, the K-Iter algorithm provides optimal solutions for three examples and is always faster than the symbolic execution. For the two most complex examples (\(\sum q > 1\)), neither K-Iter nor the symbolic execution method provides a solution.

5. RELATED WORK

A first throughput evaluation method was proposed in [10]; it consists of transforming an SDFG into a particular HSDFG (a case of SDFG for which every production and consumption rate is equal to 1) where each node corresponds to a task execution and where edges are precedence relationships. This is the expansion. However, this transformation is not polynomial; its complexity is related to the repetition vector of the SDFG. Later, it was proved that this transformation considers more arcs and nodes than required, and two solutions were proposed to reduce the HSDFG's size [12, 6]. More recently, a max-plus algebra solution was proposed to progressively build an expansion until it reaches optimality [9]. This solution uses pessimistic and optimistic throughput evaluation methods to test optimality.

Alternatively, a throughput evaluation technique based on symbolic execution has been proposed for SDFG [8] and extended to CSDFG [16]. These methods rely on the fact that the state-space of a consistent (C)SDFG is a finite set. By executing every task as soon as possible, a previously known state has to be met again. Then, once a cyclic execution pattern is revealed, the throughput can easily be computed. Yet the minimal distance between two identical states is not polynomially related to the instance size. As a consequence, the complexity of this method is exponential.

When the throughput evaluation is used as a decision function (such as in design space exploration), accurate solutions are no longer required and approximate methods can be used. Several solutions were proposed to reduce the complexity of the problem by ignoring cycles [13] or by restricting the considered schedules to periodic schedules [1, 4, 11].

6. CONCLUSION

This article presents K-Iter, an optimal algorithm based on a K-periodic scheduling technique to quickly evaluate CSDFG throughput.
While its worst-case complexity is comparable to other optimal methods, it has been observed to be more efficient in practice. However, several cases exist for which the K-Iter algorithm is as slow as, or even slower than, other optimal solutions. We believe such cases are key to studying the complexity of the throughput evaluation problem, and they offer an opportunity for future work.

\(^1\)https://github.com/bbodin/kiter

7. ACKNOWLEDGEMENTS

This work is supported by the EPSRC grant PAMELA EP/K008730/1. We also thank the reviewers for their constructive feedback.

8. REFERENCES
Efficient Dual-ISA Support in a Retargetable, Asynchronous Dynamic Binary Translator

Digital Object Identifier (DOI): 10.1109/SAMOS.2015.7363665
Document Version: Peer reviewed version
Published In: Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS), 2015 International Conference on

Efficient Dual-ISA Support in a Retargetable, Asynchronous Dynamic Binary Translator

Tom Spink, Harry Wagstaff, Björn Franke and Nigel Topham
Institute for Computing Systems Architecture
School of Informatics, University of Edinburgh
t.spink@sms.ed.ac.uk, h.wagstaff@sms.ed.ac.uk, bfranke@inf.ed.ac.uk, npt@inf.ed.ac.uk

Abstract—Dynamic Binary Translation (DBT) allows software compiled for one Instruction Set Architecture (ISA) to be executed on a processor supporting a different ISA. Some modern DBT systems decouple their main execution loop from the built-in Just-In-Time (JIT) compiler, i.e. the JIT compiler can operate asynchronously in a different thread without blocking program execution.
However, this creates a problem for target architectures with dual-ISA support such as ARM/THUMB, where the ISA of the currently executing instruction stream may differ from the one processed by the JIT compiler, due to their decoupled operation and dynamic mode changes. In this paper we present a new approach for dual-ISA support in such an asynchronous DBT system, which integrates ISA mode tracking and hot-swapping of software instruction decoders. We demonstrate how this can be achieved in a retargetable DBT system, where the target ISA is not hard-coded, but a processor-specific module is generated from a high-level architecture description. We have implemented ARM v5T support in our DBT and demonstrate execution rates of up to 1148 MIPS for the SPEC CPU 2006 benchmarks compiled for ARM/THUMB, achieving on average 192%, and up to 323%, of the speed of QEMU, which has been subject to intensive manual performance tuning and requires significant low-level effort for retargeting.

I. INTRODUCTION

The provision of a compact 16-bit instruction set architecture (ISA) alongside a standard, full-width 32-bit RISC ISA is a popular architectural approach to code size reduction. For example, some ARM processors (e.g. ARM7TDMI) implement the compact THUMB instruction set, whereas MIPS has a similar offering called MIPS16E. Common to these compact 16-bit ISAs is that the processor operates either in 16-bit or 32-bit mode, and switching between modes of operation is done explicitly through mode change operations, or implicitly through PC load instructions. For instruction set simulators (ISS), especially those using dynamic binary translation (DBT) technology rather than instruction-by-instruction interpretation only, dynamic changes of the ISA present a challenge.
Their integrated instruction decoder, part of the just-in-time (JIT) compiler translating from the target to the host system's ISA, needs to support two different instruction encodings and keep track of the current mode of operation. This is a particularly difficult problem if the JIT compiler is decoupled from the main execution loop and, for performance reasons, operates asynchronously in a different thread as in e.g. [1] or [2]. For such asynchronously multi-threaded DBT systems, the ISA of the currently executed fragment of code may differ from the one currently processed by the JIT compiler. In fact, in the presence of a JIT compilation task farm [2], each JIT compilation worker may independently change its target ISA based on the encoding of the code fragment it is operating on. Most DBT systems [3], [4], [5], [6], [7], [8], [9], [1], [10], [11], [12], [13], [14], [15] avoid dealing with this added complexity and do not provide support for dual-ISA systems at all. A notable exception is the ARM port of QEMU [16], which supports both ARM and THUMB instructions, but tightly couples its JIT compiler and main execution loop and, thus, misses the opportunity to offload the JIT compiler from the critical path to a separate thread. The added complexity and possible performance implications of handling dual ISAs in DBT systems motivate us to investigate high-level retargetability, where low-level implementation and code generation details are hidden from the user. In our system, ISA modes, instruction formats and behaviours are specified using a C-based architecture description language (ADL), which is processed by a generator tool that creates a dynamically loadable processor module. This processor module encapsulates the necessary ISA tracking logic, instruction decoder trees and target instruction implementations.
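A hypothetical sketch of what such a generated processor module might look like is given below. The structure names and fields are ours, chosen only to illustrate the idea of one descriptor (and one decoder tree) per supported ISA with a designated default; the real module is generated from the ADL model and is considerably richer.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative shape of a generated processor module: one descriptor per
// supported ISA, each of which would own its own decoder tree.
struct IsaDescriptor {
    std::string name;     // e.g. "arm" or "thumb"
    unsigned fetchBytes;  // size of one instruction word in this ISA
};

struct ProcessorModule {
    std::string arch;                  // e.g. "armv5t"
    std::vector<IsaDescriptor> isas;   // all ISAs declared in the ADL model
    unsigned defaultIsa;               // index of the default ISA
};

// Hand-written stand-in for what the generator would emit for ARM v5T:
// two ISAs (32-bit ARM words, 16-bit THUMB halfwords), ARM as default.
ProcessorModule makeArmV5T() {
    return ProcessorModule{"armv5t", {{"arm", 4}, {"thumb", 2}}, 0};
}
```

Keeping the ISA list data-driven is what lets the DBT core remain target-agnostic: mode tracking and decoder selection operate on indices into this table rather than on architecture-specific state.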
Users of our system can entirely focus on the task of transcribing instruction definitions from the processor manual and are relieved of the burden of writing or modifying DBT-internal code concerning ISA mode switches. In this paper we introduce a set of novel techniques enabling dual-ISA support in asynchronous DBT systems, involving ISA mode tracking and hot-swapping of software instruction decoders. The key ideas can be summarised as follows: First, for ISA mode tracking we annotate regions of code discovered during initial interpretive execution with their target ISA. This information cannot be determined purely statically. Second, we maintain separate instruction decoder trees for both ISAs and dynamically switch between software instruction decoders in the JIT compiler according to the annotation on the code region under translation. Maintaining two instruction decoder trees, one for each ISA, contributes to efficiency. The alternative solution, a combined decoder tree, would require repeated mode checks to be performed as opcodes and fields of both ISAs may overlap. Finally, we demonstrate how dual-ISA support can be integrated in a retargetable DBT system, where both the interpreter and JIT compiler, including their instruction decoders and code generators, are generated from a high-level architecture description. We have implemented full ARM v5T support, including complete coverage of THUMB instructions, in our retargetable, asynchronous DBT system and evaluated it against the SPEC CPU 2006 benchmark suite. Using the ARM port of the GCC compiler we have compiled the

A. Translation/Execution Models for DBT Systems

Before describing our contributions we review existing translation/execution models for DBT systems with respect to their ability to support dual target instruction sets.

1) Single-mode translation/execution model

a) Interpreter only. In this mode the entire target program is executed on an instruction-by-instruction basis.
Strictly, this is not DBT as no translation takes place. It is straightforward to keep track of the current ISA as mode changes take immediate effect and the interpreter can handle the next instruction appropriately based on its current state (see Figure 1(a)). ISS using interpretative execution such as SIMPLESCALAR [9] or ARMISS [15] have low implementation complexity, but suffer from poor performance.

b) JIT only. Interpreter-less DBT systems exclusively rely on JIT compilation to translate every target instruction to native code before executing this code. As a consequence, execution in this model will pause as soon as previously unseen code has been discovered and only resume after JIT compilation has completed. ISA mode changes take immediate effect (see Figure 1(b)) and are again simple to implement as native code execution and JIT compilation stages are tightly coupled and mutually exclusive. JIT-only DBT systems are of low complexity and provide better performance than purely interpreted ones, but rely on very fast JIT compilers, which in turn will often perform very little code optimisation. This, and the fact that the JIT compiler is on the critical path of the main execution loop within a single thread, limits the achievable performance. QEMU [16], STRATA [6], [7], [8], SHADE [4], SPIRE [17], and PIN [18] are based on this model.

2) Mixed-mode translation/execution model

a) Synchronous (single-threaded). This model combines both an interpreter and a JIT compiler in a single DBT (see Figure 1(c)). Initially, the interpreter is used for code execution and profiling. Once a region of hot code has been discovered, the JIT compiler is employed to translate this region to faster native code. The advantage of a mixed-mode translation/execution model is that only profitable program regions are JIT translated, whereas infrequently executed code can be handled in the interpreter without incurring JIT compilation overheads [19].
Due to its synchronous nature, ISA tracking is simple in this model: the current machine state is available in the interpreter and can be used to select the appropriate instruction decoder in the JIT compiler. As before, the JIT compiler operates in the same thread as the main execution loop and program execution pauses whilst code is translated. This limits overall performance, especially during prolonged code translation phases. A popular representative of this model is DYNAMO [20].

b) Asynchronous (multi-threaded). This model is characterised by its multi-threaded operation of the main execution loop and JIT compiler. Similar to the synchronous mixed-mode case, an interpreter is used for initial code execution and discovery of hotspots. However, in this model the interpreter enqueues hot code regions to be translated by the JIT compiler and continues operation without blocking (see Figure 1(d)). As soon as the JIT compiler installs the native code the execution mode switches over from interpreted to native code execution. Only in this model is it possible to leverage concurrent JIT compilation on multi-core host machines, hiding the latency of JIT compilation and ultimately contributing to higher performance of the DBT system [1], [2]. Unfortunately, this model presents a challenge to implementing dual-ISA support: the current machine state represented in the interpreter may have advanced due to its concurrent operation and cannot be used to appropriately decode target instructions in the JIT compiler.

In summary, decoupling the JIT compiler from the main execution loop and offloading it to a separate thread has been demonstrated to increase performance in multi-threaded DBT systems. However, it remains an unsolved problem how to efficiently handle dynamic changes of the target ISA without tightly coupling the JIT compiler and, thus, losing the benefits of its asynchronous operation.

B. Motivating Example

The nature of the ARM and THUMB instruction sets is such that it is not possible to statically determine from the binary encoding alone which ISA an instruction is part of. This becomes even more important when it is noted that ARM instructions are 32-bit in length, and THUMB instructions are 16-bit. For example, consider the 32-bit word `e2810006`. An ARM instruction decoder would decode the instruction as:

```
add r0, r1, #6
```

whereas a THUMB instruction decoder would consider the above 32-bit word as two 16-bit words, and would decode the following two THUMB instructions:

```
mov r6, r0
b.n +4
```

An ARM processor correctly decodes the instruction by being in one of two dynamic modes: ARM or THUMB. A disassembler, given a sequence of instructions, has no information about which ISA the instructions belong to, and can therefore not make the distinction between ARM and THUMB instructions on a raw instruction stream; it must use debugging information provided with the binary to perform disassembly. If the debugging information is not available (e.g. it has been "stripped" from the binary) then the disassembler must be instructed how to decode the instructions (assuming the programmer knows), and if the instructions are mixed-mode, then it will not be able to effectively decode at all. This problem for disassemblers directly translates to the same problem in any DBT with multi-ISA support. A DBT necessarily works on a raw instruction stream – without debugging information – and therefore must use its own mechanisms to correctly decode instructions. In the example of an ARM/THUMB DBT, it may choose to simulate a THUMB status bit as part of the CPSR register existent in the ARM architecture (see Section II), and therefore use the information within the register to determine how the current instruction should be decoded.
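To make the ambiguity concrete, here is a minimal sketch with our own toy decoders, each hard-coded for just the single encoding this example needs (they are not the DBT's generated decode trees): the same bits yield different instructions depending on which decoder the current mode selects.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Toy ARM decoder: recognises only data-processing with immediate operand
// (bits 27-25 == 001) and opcode ADD (bits 24-21 == 0100).
std::string decodeArm(uint32_t w) {
    if (((w >> 25) & 0x7u) == 0x1u && ((w >> 21) & 0xFu) == 0x4u) {
        unsigned rn = (w >> 16) & 0xFu, rd = (w >> 12) & 0xFu, imm = w & 0xFFu;
        return "add r" + std::to_string(rd) + ", r" + std::to_string(rn) +
               ", #" + std::to_string(imm);
    }
    return "?";
}

// Toy THUMB decoder for one 16-bit halfword: recognises only LSL immediate
// (bits 15-11 == 00000); a zero shift amount is printed as mov.
std::string decodeThumb(uint16_t h) {
    if ((h >> 11) == 0x0u && ((h >> 6) & 0x1Fu) == 0x0u) {
        unsigned rm = (h >> 3) & 0x7u, rd = h & 0x7u;
        return "mov r" + std::to_string(rd) + ", r" + std::to_string(rm);
    }
    return "?";
}
```

Feeding `0xe2810006` to `decodeArm` yields the ADD above, while its low halfword `0x0006` fed to `decodeThumb` yields the register move: choosing between the two decoders is exactly the mode decision the DBT must track.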
But as mentioned in Section I-A, this approach does not work in the context of an asynchronous JIT compiler, as the state of the CPSR within the interpreter would be out of sync with the compiler during code translation.

C. Overview

The remainder of this paper is structured as follows. We review the dual ARM/THUMB ISA as far as relevant for this paper in Section II. We then introduce our new methodology for dual-ISA DBT support in Section III. This is followed by the presentation of our experimental evaluation in Section IV and a discussion of related work in Section V. Finally, in Section VI we summarise our findings and conclude.

II. BACKGROUND: ARM/THUMB

THUMB is a compact 16-bit instruction set supported by many ARM cores in addition to their standard 32-bit ARM ISA. Internally, narrow THUMB instructions are decoded to standard ARM instructions, i.e. each THUMB instruction has a 32-bit counterpart, but the inverse is not true. In THUMB mode only 8 out of the 16 32-bit general-purpose ARM registers are accessible, whereas in ARM mode no such restrictions apply. The narrower 16-bit instructions offer memory advantages such as increased code density and higher performance for systems with slow memory. The Current Program Status Register (CPSR) holds the processor mode (user or exception flag), interrupt mask bits, condition codes, and the THUMB status bit. The THUMB status bit (T) indicates the processor's current state: 0 for ARM state (default) or 1 for THUMB. A saved copy of the CPSR, called the Saved Program Status Register (SPSR), exists for exception modes only. The usual method to enter or leave the THUMB state is via the Branch and Exchange (BX) or Branch, Link, and Exchange (BLX) instructions, but nearly every instruction that is permitted to update the PC may make a mode transition. During the branch, the CPU examines the least significant bit (LSB) of the destination address to determine the new state.
Since all ARM instructions are aligned on either a 32- or 16-bit boundary, the LSB of the address is not used in the branch directly. However, if the LSB is 1 when branching from ARM state, the processor switches to THUMB state before it begins executing from the new address; if 0 when branching from THUMB state, the processor changes back to ARM state. The LSB is also set (or cleared) in the LR to support returning from functions that were called from a different mode. When an exception occurs, the processor automatically begins executing in ARM state at the address of the exception vector, even if the CPU is running in THUMB state when that exception occurs. When returning from the processor's exception mode, the saved value of T in the SPSR register is used to restore the state. This bit can be used, for example, by an operating system to manually restart a task in the THUMB state – if that is how it was running previously.

III. METHODOLOGY: DUAL-ISA DBT SUPPORT

The DBT consists of an execution engine and a compilation engine. The execution engine will execute either native code (which has been generated from instructions by the compilation engine) or will execute instructions in an interpreter loop. The execution engine interpreter will also generate profiling data to pass to the compilation engine (see Figure 2). The execution engine maintains a machine state structure, which contains the current execution mode of the target processor (along with other state information, such as register values etc.). The machine state is only available to the execution engine, as the asynchronous compilation engine does not run in sync with the currently executing code. The compilation engine accepts compilation work units generated by the profiling component of the interpreter. A compilation work unit contains a control-flow graph (fundamentally a list of basic-blocks and their associated successor blocks) that are to be compiled.
Each basic-block also contains the ISA mode that the instructions within the block should be decoded with. ### A. ISA Mode Tracking The current ISA mode of the CPU is stored in a CPU state variable, which is updated in sequence as the instructions of the program are being executed. When the interpreter needs to decode an instruction (and cannot retrieve the decoding from the decoder cache), the current mode is looked up from the state variable and sent to the decoder service, which then decodes the instruction using the correct ISA decode tree. If an instruction causes a CPU ISA mode change to occur (for example, in the case of the ARM architecture, a BLX instruction) then the CPU state will be updated accordingly. Since the decoder service is a detached component, and may be called by a thread other than the main execution loop, it cannot (and should not) access the CPU state, and therefore must be instructed by the calling routine which ISA mode to use. Additionally, since a JIT compiler thread does not operate in sync with the execution thread, it also cannot access the CPU state and must call the decoder service with the ISA mode information supplied in the metadata of the basic-block it is currently compiling. A basic-block can only contain instructions of one ISA mode. This metadata is populated by the profiling element of the interpreter (see Figure 2). In order to remain retargetable (and therefore target hardware agnostic), the ISA mode is a first-class citizen in the DBT framework (see Figure 3), and is not tied to a specific architecture’s method of handling multiple ISAs. For example, the ARM architecture tracks the current ISA mode by means of the T bit in the CPSR register. ### B. Hotswapping Software Instruction Decoders The instruction decoder is implemented as a separate component, or service, within the DBT and as such is called by any routine that requires an instruction to be decoded. 
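In outline, the decoder's calling discipline is as follows. This is a sketch with names of our own choosing; the paper shows the discipline, not the framework's actual types. Note that both callers pass the ISA mode explicitly, and that the mode also determines the fetch width (a little-endian guest is assumed):

```c
#include <stdint.h>

typedef enum { ISA_ARM, ISA_THUMB } isa_mode_t;

/* The decoder service is stateless with respect to the CPU: every
 * caller passes the ISA mode explicitly. The mode selects the fetch
 * width: 32-bit ARM words vs. 16-bit THUMB words. */
uint32_t fetch_insn(const uint8_t *mem, uint32_t pc, isa_mode_t mode)
{
    uint32_t w = (uint32_t)mem[pc] | ((uint32_t)mem[pc + 1] << 8);
    if (mode == ISA_ARM)
        w |= ((uint32_t)mem[pc + 2] << 16) | ((uint32_t)mem[pc + 3] << 24);
    return w;
}

/* Interpreter path: the mode comes from the live machine state. */
typedef struct { isa_mode_t isa_mode; uint32_t pc; } cpu_state_t;

uint32_t interp_fetch(const cpu_state_t *cpu, const uint8_t *mem)
{
    return fetch_insn(mem, cpu->pc, cpu->isa_mode);
}

/* JIT path: the mode comes from the basic-block metadata captured by
 * the interpreter's profiler, never from the racing CPU state. */
typedef struct { uint32_t start_pc; isa_mode_t isa_mode; } basic_block_t;

uint32_t jit_fetch(const basic_block_t *bb, const uint8_t *mem)
{
    return fetch_insn(mem, bb->start_pc, bb->isa_mode);
}
```

The point of the two wrappers is that neither the decoder nor the JIT ever reads the CPU state directly, which is exactly what makes asynchronous compilation safe.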
Such routines would be the interpreter, when a decoder cache miss occurs, and a JIT compilation thread, when an instruction is being translated. Upon such a request being made, the decoder must be provided with the PC from which to fetch the instruction, and the ISA that the instruction should be decoded with. Given this information as part of a decoding request, the decoder service can then make a correctly sized fetch from the guest system's memory, and select the correct decoder tree with which to perform the decode of the instruction. The interpreter will perform the decode request using the current machine state, available as part of the execution engine, and a JIT compilation thread will perform the decode request using the snapshot of the machine state provided as part of the compilation work unit (see Figure 4). ### C. High-Level Retargetability We use a variant of the ARCHC [21] architecture description language (ADL) for the specification of the target architecture, i.e. architecturally visible registers, instruction formats and behaviours. A simplified example of our ARM V5T model is shown in Listing 1. Please note the declaration of the two supported ISAs in lines 18–19, where the system is made aware of the presence of the two target ISAs and the ARM ISA is set as the default. Within the constructor in lines 25–26 we include the detailed specifications for both supported ISAs. After the top-level model (describing register banks, registers, flags and other architectural components) has been defined, details of both supported ISAs need to be specified. Simplified examples of the ARM and THUMB ISA models are shown in Listings 2 and 3 in Figure 5. For each ISA we need to provide its name (line 4) and fetch size (line 5), of which instruction words are multiples.
This is followed by a specification of the instruction formats present in the ARM and THUMB ISAs (lines 7–11) before each instruction is assigned exactly one of the previously defined instruction formats (lines 13–17). The main sections of the instruction definitions (starting in lines 21 and 20, respectively) describe the instruction patterns for decoder generation (lines 24 and 23), their assembly patterns for disassembly (lines 25 and 24) and the names of the functions that implement the actual instruction semantics, also called behaviours (lines 27 and 25). In an offline stage, we generate a target-specific processor module (see Figure 3) from this processor and ISA description. In particular, the individual decoder trees (see Figure 4) for both the ARM and THUMB ISAs are generated from an ARCHC-like specification using an approach based on [22], [23]. Note that we use ARCHC as a description language only, and do not use or implement any of the existing ARCHC tools. The benefit of choosing ARCHC as the description language is that it is well known in the architecture design field, and descriptions exist for a variety of real architectures.
Partial evaluation also eliminates unnecessary runtime decoding checks (such as flag setting). The generated processor module is dynamically loaded by our DBT system. At runtime the JIT compiler performs translation of regions of target instructions [2] to native code of the host machine using the offline-generated generator functions, which employ additional dynamic optimisations such as partial evaluation [24] to improve both the quality and the code size of the generated code. As the high-level implementations of the instructions are written in a strict subset of C, the behaviours for each instruction are used directly by the interpreter to execute each instruction as it is encountered. As the interpreter executes, it builds profiling information about the basic blocks it has encountered and, after a certain configurable threshold is met, the profiling information (which includes a control-flow graph)

Listing 4: ARCHC-like behaviour for an ARM V5 adc instruction

Listing 5: Equivalent QEMU code for an adc instruction

![Fig.
6: Comparison: Semantic action of an ARM V5 adc instruction expressed using a high-level ARCHC-like specification (top) and the low-level QEMU implementation (bottom).](image)

TABLE I: DBT Host Configuration.

<table>
<thead>
<tr>
<th>Parameter</th>
<th>Setting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Vendor &amp; Model</td>
<td>Dell PowerEdge R610</td>
</tr>
<tr>
<td>Processor Type</td>
<td>2 × Intel® Xeon™ X5660</td>
</tr>
<tr>
<td>Number of cores</td>
<td>2 × 6</td>
</tr>
<tr>
<td>Clock/FSB Frequency</td>
<td>2.80/1.33 GHz</td>
</tr>
<tr>
<td>L1-Cache</td>
<td>2 × 6 × 32K Instruction/Data</td>
</tr>
<tr>
<td>L2-Cache</td>
<td>2 × 6 × 256K</td>
</tr>
<tr>
<td>L3-Cache</td>
<td>2 × 12 M</td>
</tr>
<tr>
<td>Memory</td>
<td>36 GB across 6 channels</td>
</tr>
<tr>
<td>Operating System</td>
<td>Linux version 2.6.32 (x86-64)</td>
</tr>
</tbody>
</table>

TABLE II: DBT System Configuration.

<table>
<thead>
<tr>
<th>DBT Parameter</th>
<th>Setting</th>
</tr>
</thead>
<tbody>
<tr>
<td>Target architecture</td>
<td>ARM V5T</td>
</tr>
<tr>
<td>Host architecture</td>
<td>x86-64</td>
</tr>
<tr>
<td>Translation/Execution Model</td>
<td>Asynch. Mixed-Mode</td>
</tr>
<tr>
<td>Tracing Scheme</td>
<td>Region-based [2]</td>
</tr>
<tr>
<td>Tracing Interval</td>
<td>30000 blocks</td>
</tr>
<tr>
<td>JIT compiler</td>
<td>LLVM 3.4</td>
</tr>
<tr>
<td>No. of JIT Compilation Threads</td>
<td>10</td>
</tr>
<tr>
<td>JIT Optimisation</td>
<td>-O3 &amp; Part. Eval. [24]</td>
</tr>
<tr>
<td>Dynamic JIT Threshold</td>
<td>Adaptive [2]</td>
</tr>
<tr>
<td>System Calls</td>
<td>Emulation</td>
</tr>
</tbody>
</table>

is sent as a compilation work unit to the work unit queue, where it is picked up by an idle compiler worker thread. The worker thread then processes the blocks within the work unit and (utilising the generator functions) generates native code for each block (see Figure 3).

IV. EXPERIMENTAL EVALUATION

A.
Experimental Setup and Methodology

The target architecture for our DBT system is ARM V5T. We provide full coverage of both the standard ARM and compact THUMB ISAs. The host machine we have used for performance measurements is a 12-core x86 DELL™ PowerEdge™ as described in Table I. We have configured our DBT system according to the information provided in Table II. We have evaluated our retargetable DBT system using the SPEC CPU2006 integer benchmark suite. It is widely used and considered to be representative of a broad spectrum of application domains. We used it together with its reference data sets. The benchmarks have been compiled using the GCC 4.6.0 C/C++ cross-compilers, targeting the ARM V5T architecture (without hardware floating-point support) and enabling THUMB code generation with -O3 optimisation settings. We have measured the elapsed real time between invocation and termination of each benchmark in our DBT system using the UNIX time command. We used the average elapsed wall clock time across 10 runs for each benchmark and configuration in order to calculate execution rates (in MIPS, in terms of target instructions) and speedups. For summary figures we report harmonic means, weighted by dynamic instruction count, to ensure the averages account for the different running times of benchmarks. For the comparison to the state-of-the-art we use the ARM port of QEMU 1.4.2 as a baseline.
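The summary statistic used above, a harmonic mean of per-benchmark MIPS weighted by dynamic instruction count, can be computed as follows (the function name is ours). With per-benchmark rates rᵢ = instructionsᵢ/timeᵢ and weights equal to the instruction counts, the weighted harmonic mean reduces algebraically to total instructions over total time:

```c
/* Summary execution rate: harmonic mean of per-benchmark MIPS,
 * weighted by dynamic instruction count. With rate_i = insns_i/time_i
 * and weights w_i = insns_i this is (sum w_i) / (sum w_i / rate_i),
 * which simplifies to (total instructions) / (total time). */
double weighted_harmonic_mips(const double *insns_millions,
                              const double *seconds, int n)
{
    double total_insns = 0.0, total_time = 0.0;
    for (int i = 0; i < n; i++) {
        total_insns += insns_millions[i];
        total_time  += seconds[i];
    }
    return total_insns / total_time;
}
```

This weighting is what prevents a short-running benchmark with an unusually high rate from dominating the average.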
B. Key Results

We use MIPS (millions of instructions per second) as the metric to measure the execution rate of both our DBT and QEMU, in accordance with industry practice, where the instruction execution rate is measured in terms of target instructions. Since we use exactly the same input for each test in both our DBT and QEMU, the number of target instructions does not change between the two systems. Figure 7 shows that in nearly every case the execution rate of the dual-ISA benchmark is consistently higher than that of the single-ISA implementation. Whilst the actual running times are longer for dual-ISA binaries (due to the higher dynamic instruction count), the DBT throughput is greater and on average we achieve a 1.28x improvement in execution rate over single-ISA. The instruction counts in Table III show that more instructions are executed for dual-ISA implementations, which leads to a longer running time; nevertheless, the throughput of our DBT (as measured in target MIPS) outperforms the single-ISA implementation. It has been shown (e.g. in [25]) that whilst THUMB-compiled applications are typically physically smaller than when compiled for ARM, the overhead introduced leads to a greater execution time on actual hardware implementations. This overhead can be attributed to the extra operations required in THUMB mode to achieve the same effect as in ARM mode. Specifically, there are two main sources of overhead:

1) Data processing instructions can only operate on the first eight registers (r0 to r7), so data must be explicitly moved from the high registers to the low registers.

2) No THUMB instructions (except for the conditional branch instruction) are predicated, and therefore local branches around conditional code must be made, in contrast to ARM, where blocks of instructions can simply be marked up with the appropriate predicate to exclude them from execution.

The optimisation strategies employed in our DBT system remove much of this overhead: local branches (i.e. branches within a region) are heavily optimised using standard LLVM optimisation passes, and high-register operations are negated through use of redundant-load and dead-store elimination.

C. Dynamic ISA Switching

Our results on dynamic ISA switching are summarised in Table III.
For each benchmark we list the total number of ARM instructions, ARM/THUMB instructions, ISA switches and the average dynamic instruction count between ISA switches. All benchmarks make use of both the ARM and THUMB ISAs. On average 8.76% of the total number of instructions are ARM and the rest are THUMB instructions, but this figure varies significantly between benchmarks. 401.bzip2 and 429.mcf have similar ratios of THUMB instructions (both approximately 99%) but quite different relative performance characteristics: 429.mcf executes 3% slower in dual-ISA mode, whereas 401.bzip2 executes 16% faster. This kind of variance indicates that our DBT's support for a dual ISA does not necessarily introduce any overhead; the difference is simply a function of the behaviour of the binary being translated.

D. Comparison to State-of-the-Art

Figure 8 shows the absolute performance in target MIPS of our DBT compared with the state-of-the-art QEMU. The performance of our DBT system is consistently higher than that of QEMU; on average our DBT is 192% faster for dual-ISA implementations. Since the target instruction count is exactly the same between DBTs (per benchmark), this also indicates an improvement in DBT running time. We can attribute this to the ability of our JIT compiler to produce highly optimised native code, using aggressive LLVM optimisations that simply do not (and, given the trace-based architecture, cannot) exist in QEMU. We employ a region-based compilation strategy, enabling control flow within a region to be subject to a series of loop optimisations. Our ability to hide compilation latency by offloading JIT compilation to multiple threads also provides a performance gain, as we are continuously executing target instructions, in contrast to QEMU, which stalls as it discovers and compiles new code.
The high-level code used to describe instruction implementations enables easy debugging, testing and verification, and we have internal tools that can automatically generate and run tests against reference hardware. In contrast, QEMU has a single large file that contains the decoder and the code generator, with limited documentation and no explanation of how instructions are decoded or how their execution is modelled. Using our system, once the high-level code has been written, any improvements in the underlying framework (or even the processor module generator, see Figure 3) are immediately available to all architecture descriptions, and if errors are detected in the decoder or instruction behaviours, a single correction in the high-level code fixes both the JIT and the interpretive component. **E. Comparison to Native Execution** Figure 9 shows the absolute performance in target MIPS of our DBT compared with execution on a native ARM platform (a QUALCOMM DRAGONBoard featuring four SnapDragon 800 cores). On average, we are 31% slower than native execution for dual-ISA implementations, but there are some cases where our simulation is actually faster than native execution on a 1.7 GHz out-of-order ARM core. For example, 429.mcf is 3.1x faster in our DBT than when executing natively. This may be attributed to 429.mcf warming up quickly in our JIT and then spending the remaining time executing host-optimised native code. Conversely, 403.gcc is 2.2x slower than native in our DBT, which may be attributed to 403.gcc's inherently phased behaviour, which invokes multiple JIT compilation sessions throughout the lifetime of the benchmark. **V. Related Work** DAISY [3] is an early software dynamic translator, which uses POWERPC as the input instruction set and a proprietary VLIW architecture as the target instruction set. It does not provide dual-mode ISA support.
SHADE [4] and EMBRA [5] are DBT systems targeting the SPARC V8/V9 and MIPS I ISAs, but neither system provides support for a dual-mode ISA. STRATA [6, 7] is a retargetable software dynamic translation infrastructure designed to support experimentation with novel applications of DBT. STRATA has been used for a variety of applications including system call monitoring, profiling, and code compression. The STRATA-ARM port [8] has introduced a number of ARM-specific optimisations, for example, involving reads of and writes to the exposed PC. STRATA-ARM targets the ARM V5T ISA, but provides no support for THUMB instructions. The popular SIMPLESCALAR simulator [9] has been ported to support the ARM V4 ISA, but this port lacks support for THUMB. The SIMIT-ARM simulator can asynchronously perform dynamic binary translation (using GCC, as opposed to an in-built translator), and accomplishes this by dispatching work to other processor cores, or across the network using sockets [1]. It does not, however, support the THUMB instruction set, nor does it intend to in the near future. XTREM [10] and XEMU [11] are power and performance simulators for the Intel XSCALE core. Whilst this core implements the ARM V5TE ISA, THUMB instructions are supported by neither XTREM nor XEMU. FACSIM [12] is an instruction set simulator targeting the ARM9E-S family of cores, which implement the ARM V5TE architecture. FACSIM employs DBT technology for instruction-accurate simulation and interpretive simulation in its cycle-accurate mode. Unfortunately, it does not support THUMB instructions in either mode. SYNTSIM [13] is a portable functional simulator generated from a high-level architectural description. It supports the ALPHA ISA, but provides no support for mixed-mode instruction sets. SIMICS/ARM [14] has a fairly complete implementation of the core ARM V5 instruction set; the THUMB and enhanced DSP extensions are not implemented, though.
ARMiss [15] is an interpretive simulator of the ARM920T architecture, which uses instruction caching but provides no THUMB support. Similarly, the ARM port of the popular Pin tool does not support the THUMB extensions [26]. As outlined above, none of the ARM DBTs mentioned support the THUMB instruction set, and the others do not offer any form of multiple-ISA support specific to their target platform. This could indicate that the problem of supporting multiple instruction sets has been deemed too complex to be worth implementing, or has not yet even been considered. QEMU [16] is a well-known retargetable emulator that supports ARM V5T platforms, including THUMB instructions. QEMU translates ARM/THUMB instructions to native x86 code using its tiny code generator (TCG). QEMU is interpreter-less, i.e. all executed code is translated. In particular, this means that TCG is not decoupled from the execution loop: execution stops whilst code is JIT-compiled and only resumes afterwards. This design decision avoids the challenges outlined in this paper, but it places the JIT compiler on the critical path for code execution and misses the opportunity to offload the JIT compiler to another core of the host machine [27], [2], [28]. Another mixed-ISA simulator is presented in [29]; however, it is based entirely on interpretive execution with instruction caching and is about two orders of magnitude slower than either QEMU or our DBT system. ARM provides the ARMulator [30] and Fast Models [31] instruction set simulators. ARMulator is an interpretive ISS and has been replaced by the JIT compilation-based Fast Models, which supports THUMB and operates at speeds comparable to QEMU-ARM, but no internal details are available due to its proprietary nature. LISA is a hardware description language aimed at describing "programmable architectures, their peripherals and interfaces".
The project also produces a series of tools that accept a LISA definition and produce a toolchain consisting of compilers, assemblers, linkers, and an instruction set simulator. The simulator produced is termed a JIT-CCS (just-in-time cache compiled simulator) [32] and is a synchronous JIT-only simulator, which compiles and executes on an instruction-by-instruction basis, caching the results of the compilation for fast re-use. However, each instruction encountered is not in fact compiled as such, but rather linked to existing pre-compiled instruction behaviours as they are encountered. These links are placed in a cache, indexed by instruction address and are tagged with the instruction data. This arrangement supports self-modifying code and arbitrary ISA mode switches, as when a cache lookup occurs, the tag is checked to determine if the cached instruction is for the correct mode, and that it is equal to the one that is about to be executed. In contrast to our asynchronous approach, the simulator knows which ISA mode the emulated processor is currently in at instruction execution time and if a cache miss occurs, it can use the appropriate instruction decoder at that point to select the pre-compiled instruction implementation. As our decode and compilation phase is decoupled from the execution engine, we cannot use this method to select which decoder to use. The main drawback to this approach is that it is not strictly JIT-compilation, but rather JIT-selection of instruction implementations, and hence no kind of run-time optimisation is performed, especially since the simulation engine executes an instruction at a time. This is in contrast to our approach, which compiles an entire region of discovered guest instructions at a time, and executes within the compiled region of code. 
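The JIT-CCS cache lookup described above can be sketched as follows. This is a hypothetical rendering of the published scheme; the LISA tools' internal data structures are proprietary and not shown here:

```c
#include <stdint.h>
#include <stdbool.h>

typedef enum { ISA_ARM, ISA_THUMB } isa_mode_t;

/* One entry of a JIT-CCS-style instruction cache, indexed by address
 * and tagged with both the raw instruction bits and the ISA mode, so
 * that self-modifying code and mode switches invalidate stale entries. */
typedef struct {
    uint32_t   addr;
    uint32_t   raw;               /* instruction bits when cached */
    isa_mode_t mode;              /* ISA mode the entry was decoded in */
    void     (*behaviour)(void);  /* link to a pre-compiled behaviour */
} cache_entry_t;

/* A lookup hits only if address, instruction bits and mode all match;
 * otherwise the simulator re-decodes in the current mode and refills
 * the entry. */
bool cache_hit(const cache_entry_t *e, uint32_t addr,
               uint32_t raw, isa_mode_t mode)
{
    return e->addr == addr && e->raw == raw && e->mode == mode;
}
```

The mode check in the tag is what lets this synchronous scheme handle arbitrary ISA switches, which, as noted above, is unavailable to an asynchronous design like ours.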
Furthermore, the instructions are only linked to behaviours, and so specialisation of the behaviours depending on static instruction fields cannot occur, resulting in greater overhead when executing an instruction. Our partial evaluation approach to instruction compilation removes this source of overhead entirely. A commercialisation of the LISA tools is available from Synopsys as their Processor Designer offering, but little information about the implementation of the simulators produced is available for this proprietary tool, other than an indication that it employs the same strategy as described above.

![Fig. 9: Absolute performance (in target MIPS) of single- and mixed-mode execution in native execution and our retargetable DBT.](image-url)

VI. SUMMARY AND CONCLUSIONS

Asynchronous mixed-mode DBT systems provide an effective means to increase JIT throughput and, at the same time, hide compilation latency, enabling the use of potentially slower, yet highly optimising code generators. In this paper we have developed a novel methodology for integrating dual-ISA support into a retargetable, asynchronous DBT system: no prior asynchronous DBT system is known to provide any support for mixed-mode ISAs. We introduce ISA mode tracking and hot-swapping of software instruction decoders as key enablers of efficient ARM/THUMB emulation. We have evaluated our approach against the SPEC CPU2006 integer benchmark suite and demonstrated that our approach to dual-ISA support does not introduce any overhead. For an ARM V5T model generated from a high-level description, our retargetable DBT system operates at 780 MIPS on average. This is equivalent to about 192% of the performance of the state-of-the-art QEMU-ARM, which has seen years of manual tuning to achieve its performance and is one of the very few DBT systems that provide both ARM and THUMB support.

REFERENCES
Abstraction-driven SAT-based Analysis of Security Protocols*

Alessandro Armando and Luca Compagna

DIST – Università degli Studi di Genova, Viale Causa 13 – 16145 Genova, Italy, {armando,compa}@dist.unige.it

Abstract. In previous work we proposed an approach to the automatic translation of protocol insecurity problems into propositional logic with the ultimate goal of building an automatic model-checker for security protocols based on state-of-the-art SAT solvers. In this paper we present an improved procedure based on an abstraction/refinement strategy which, by interleaving the encoding and solving phases, leads to a significant improvement of the overall performance of our model-checker.

1 Introduction

In spite of their apparent simplicity, security protocols are notoriously error-prone. Many published protocols have been implemented and deployed in real applications only to be found flawed years later. For instance, the Needham-Schroeder authentication protocol [28] was found vulnerable to a serious attack 17 years after its publication [22]. The problem is that security protocols are intended to be used in open, hostile environments (such as, e.g., the Internet), and therefore steps carried out by honest participants interplay with the ability of malicious agents to eavesdrop, divert, and even forge new messages. This results in a combinatorially high number of actual protocol runs whose analysis stretches traditional validation techniques (e.g. testing). This problem has stimulated the development of formal method techniques for this novel application domain, either by adapting existing techniques (such as, e.g., in [16, 29, 30]) or by devising new ones (such as, e.g., in [5, 7, 24, 12, 14, 20, 25]). In [3] we proposed an approach to the automatic translation of protocol insecurity problems into propositional logic with the ultimate goal of building a model-checker for security protocols based on state-of-the-art SAT solvers.
The approach combines a reduction of protocol insecurity problems to planning problems with well-known SAT-reduction techniques developed for planning. We have built SATMC, a model-checker for security protocols based on these ideas, which readily finds attacks on a set of well-known authentication protocols. Motivated by the observation that the time spent in generating the propositional formulae largely dominates the time spent by the SAT solver, in this paper we introduce an abstraction/refinement strategy that significantly improves the encoding time and hence the overall performance of the model-checker. We show that the performance of SATMC depends not only on the speed of the SAT solver but also on the ability of the SAT solver to return partial models.

* We wish to thank Enrico Giunchiglia, Marco Maratea, Massimo Narizzano, and Armando Tacchella for many useful and stimulating discussions and their support in the use of the SIM solver.

Structure of the paper. In Section 2 we introduce the notion of protocol insecurity problem and show its relationship with the notion of planning problem. In Section 3 we introduce techniques for encoding bounded planning problems into SAT. In Section 4 we present our improved model-checking procedure based on abstraction and refinement. In Section 5 we present experimental results obtained with our model-checker which confirm the better performance of the improved procedure. In Section 6 we discuss the related work and, finally, in Section 7 we draw some conclusions and outline future work.

2 Protocol Insecurity Problems as Planning Problems

To illustrate, consider the following one-way authentication protocol: \[(1) \ A \rightarrow B : \{N\}_K \] \[(2) \ B \rightarrow A : \{f(N)\}_K \] where \(N\) is a nonce\(^1\) generated by Alice, \(K\) is a symmetric key, \(f\) is a function known to Alice and Bob, and \(\{X\}_K\) denotes the result of encrypting text \(X\) with key \(K\).
Successful execution of the protocol should convince Alice that she has been talking with Bob, since only Bob could have formed the appropriate response to the message issued in (1). In fact, Ivory can deceive Alice into believing that she is talking with Bob whereas she is talking with her. This is achieved by executing concurrently two sessions of the protocol and using messages from one session to form messages in the other as illustrated by the following protocol trace: \[(1.1) \ A \rightarrow I(B) : \{N\}_K \] \[(2.1) \ I(B) \rightarrow A : \{N\}_K \] \[(2.2) \ A \rightarrow I(B) : \{f(N)\}_K \] \[(1.2) \ I(B) \rightarrow A : \{f(N)\}_K \] Alice starts the protocol with message (1.1). Ivory intercepts the message and (pretending to be Bob) starts a second session with Alice by replaying the received message—cf. step (2.1). Alice replies to this message with message (2.2). But this is exactly the message Alice is waiting to receive in the first protocol session. This allows Ivory to finish the first session by using it—cf. (1.2). At the end of the above steps Alice believes she has been talking with Bob, but this is obviously not the case. A problem with the rule-based notation used above to specify security protocols is that it leaves implicit many important details such as the shared information and how the principals should react to messages of an unexpected form. \(^1\) Nonces are numbers generated by principals that are intended to be used only once. This kind of description is usually supplemented with explanations in natural language which in our case explain that \( N \) is a nonce generated by Alice, that \( f \) is a function known to the honest participants only, and that \( K \) is a shared key. To cope with the above difficulties and pave the way to the formal analysis of protocols a set of models and specification formalisms have been put forward [8, 23]. In this paper we use a planning language à la STRIPS to this end. 
A planning problem is a tuple \( \Pi = (\mathcal{F}, \mathcal{A}, \mathit{Ops}, \mathcal{I}, \mathcal{G}) \), where \( \mathcal{F} \) and \( \mathcal{A} \) are disjoint sets of variable-free atomic formulae of a sorted first-order language, called fluents and actions respectively; \( \mathit{Ops} \) is a set of expressions of the form \( \text{Pre} \xrightarrow{Ac} \text{Add} \, ; \, \text{Del} \), where \( Ac \in \mathcal{A} \) and \( \text{Pre} \), \( \text{Add} \), and \( \text{Del} \) are finite sets of fluents such that \( \text{Add} \cap \text{Del} = \emptyset \); \( \mathcal{I} \) is a set of fluents representing the initial state; and \( \mathcal{G} \) is a boolean combination of fluents representing the final states. A state is represented by a set \( S \) of fluents, meaning that all the fluents in \( S \) hold in the state, while all the fluents in \( \mathcal{F} \setminus S \) do not hold in the state (closed-world assumption). An action is applicable in a state \( S \) iff the action preconditions (the fluents in \( \text{Pre} \)) occur in \( S \); the application of the action leads to a new state obtained from \( S \) by removing the fluents in \( \text{Del} \) and adding those in \( \text{Add} \). A solution to a planning problem \( \Pi \), called a plan, is a sequence of actions whose execution leads from the initial state to a final state and such that the preconditions of each action appear in the state to which the action applies. Plans can be represented in a flexible way by means of partial-order plans. A partial-order plan is a pair \( \langle \Lambda, \prec \rangle \) where \( \Lambda \subseteq \mathcal{A} \times \mathbb{N} \) and \( \prec \) is a partial order on the set of natural numbers \( \mathbb{N} \).
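The applicability and state-update semantics just defined can be captured in a few lines of Python. The following is our own illustrative sketch (not SATMC code): fluents are plain strings, an operator is a `(name, Pre, Add, Del)` tuple, and a state is a `frozenset` of fluents.

```python
# Illustrative sketch of STRIPS-style action application.
# An operator is (name, pre, add, delete); a state is a frozenset of fluents.

def applicable(state, op):
    """An action is applicable iff all its preconditions hold in the state."""
    _, pre, _, _ = op
    return pre <= state

def apply_op(state, op):
    """New state = (state \\ Del) | Add."""
    _, _, add, delete = op
    return (state - delete) | add

def run_plan(initial, ops_by_name, plan):
    """Execute a sequence of action names; fail if a precondition is missing."""
    state = frozenset(initial)
    for name in plan:
        op = ops_by_name[name]
        if not applicable(state, op):
            raise ValueError(f"{name} is not applicable")
        state = apply_op(state, op)
    return state
```

A plan is then a solution exactly when `run_plan` succeeds and the resulting state satisfies the goal.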
A plan \( P \) is in the set of plans denoted by a partial-order plan \( \langle \Lambda, \prec \rangle \) iff there exists a bijection \( \pi : \Lambda \to \{1, \ldots, |\Lambda|\} \) such that if \( i_1 \prec i_2 \) then \( \pi(\langle \alpha_1, i_1 \rangle) < \pi(\langle \alpha_2, i_2 \rangle) \) for all \( \langle \alpha_1, i_1 \rangle, \langle \alpha_2, i_2 \rangle \in \Lambda \); if \( \pi(\langle \alpha, i \rangle) = k \) then \( \alpha \) is the \( k \)-th action in \( P \). The length of the partial-order plan \( \langle \Lambda, \prec \rangle \) is the cardinality of the set \( \{i \mid \exists \alpha \in \mathcal{A} \text{ s.t. } \langle \alpha, i \rangle \in \Lambda\} \). For instance, the partial-order plan \( \langle\{\langle a,0\rangle,\langle b,0\rangle,\langle c,3\rangle,\langle d,5\rangle,\langle a,5\rangle\},\{0 \prec 3,0 \prec 5,3 \prec 5\}\rangle \) has length 3 and represents the set of plans \( \{\langle a,b,c,d,a\rangle,\langle b,a,c,d,a\rangle,\langle a,b,c,a,d\rangle,\langle b,a,c,a,d\rangle\} \). A protocol insecurity problem is given by a transition relation specifying the runs allowed by the protocol, plus an initial state and a set of bad states (i.e. states whose reachability implies the violation of the desired security properties). Protocol insecurity problems can be easily recast as planning problems. The states of the transition system model the state of the honest principals, the knowledge of the intruder, as well as the messages sent over the channel but not yet processed by the intended recipient (or diverted by the intruder). Actions model the legal transitions that can be performed by honest participants as well as the abilities of the intruder.
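The example above can be checked mechanically. The following sketch (our own illustration) enumerates the plans denoted by a partial-order plan by permuting the indexed actions consistently with \( \prec \); it is a brute-force enumeration, adequate only for tiny instances like this one.

```python
from itertools import permutations

def respects(perm, prec):
    """A permutation is admissible iff for every constraint i < j, every
    indexed action with index i occurs before every one with index j."""
    pos = {}
    for p, (_, i) in enumerate(perm):
        pos.setdefault(i, []).append(p)
    return all(max(pos[i]) < min(pos[j])
               for i, j in prec if i in pos and j in pos)

def linearizations(lam, prec):
    """Enumerate the set of plans denoted by the partial-order plan <lam, prec>."""
    return {tuple(a for a, _ in perm)
            for perm in permutations(lam) if respects(perm, prec)}
```

Running it on the partial-order plan of the example yields exactly the four plans listed in the text.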
For the simple protocol above, fluents are of the form:

- \( i(t) \), meaning that the intruder knows the term \( t \);
- \( \text{fresh}(n) \), meaning that the term (usually a constant representing a nonce) \( n \) has not been used yet;
- \( m(j,s,r,t) \), meaning that a message \( t \) has been sent (supposedly) from principal \( s \) to principal \( r \) at protocol step \( j \);
- \( w(j, s, r, [t_1, \ldots, t_k]) \), representing the state of execution of principal \( r \) at step \( j \); it means that \( r \) knows the terms \( t_1, \ldots, t_k \) at step \( j \), and (if \( j = 1, 2 \)) that \( r \) is waiting for a message from \( s \) for step \( j \) to be executed.

The initial state of the system is\(^2\)

\[ \begin{align*} w(0, a, a, [\,]) \cdot w(1, a, b, [\,]) \cdot w(0, b, b, [\,]) \cdot w(1, b, a, [\,]) \\ \cdot \text{fresh}(n_1) \cdot \text{fresh}(n_2) \cdot i(a) \cdot i(b) \end{align*} \]

Fluents \( w(0, a, a, [\,]) \), \( w(1, a, b, [\,]) \), \( w(0, b, b, [\,]) \), and \( w(1, b, a, [\,]) \) state that principals \( a \) and \( b \) are ready to play both the role of the initiator and that of the responder. Fluents \( \text{fresh}(n_1) \) and \( \text{fresh}(n_2) \) state that \( n_1 \) and \( n_2 \) are fresh numbers (i.e. they have never been used before) and hence that they can be used as nonces. Finally, \( i(a) \) and \( i(b) \) state that the identities of \( a \) and \( b \) are known to the intruder. The behavior of the honest principals and of the intruder is specified by means of operators.
The activity of sending the first message is modeled by:\(^3\)

\[ w(0, A, A, [\,]) \cdot \text{fresh}(N) \xrightarrow{\text{step}_1(A, B, N)} m(1, A, B, \{N\}_k) \cdot w(2, B, A, [f(N)]) \; ; \; w(0, A, A, [\,]) \cdot \text{fresh}(N) \]

Intuitively, this operator says that, if in the current global state there is an initiator \( A \) ready to start the protocol with somebody, and a fresh term \( N \) is available, then \( A \) can send the first message of the protocol to a responder \( B \). By doing this, \( A \) will update its state and, of course, \( N \) will not be usable as a fresh term anymore. Notice that the term \( f(N) \) is added to the acquired knowledge of \( A \) for subsequent use. The receipt of the message and the reply of the responder are modeled by:

\[ m(1, A, B, \{N\}_k) \cdot w(1, A, B, [\,]) \xrightarrow{\text{step}_2(A, B, N)} m(2, B, A, \{f(N)\}_k) \cdot w(3, B, B, [\,]) \; ; \; m(1, A, B, \{N\}_k) \cdot w(1, A, B, [\,]) \]

The final step of the protocol is modeled by:

\[ m(2, B, A, \{f(N)\}_k) \cdot w(2, B, A, [f(N)]) \xrightarrow{\text{step}_3(A, B, N)} w(4, A, A, [\,]) \; ; \; m(2, B, A, \{f(N)\}_k) \cdot w(2, B, A, [f(N)]) \]

where steps 3 and 4, occurring as first parameter in \( w \)-fluents, are used to denote the final state of the responder and of the initiator, respectively. The following rule models the ability of the intruder to divert the information exchanged by the honest participants:

\[ m(J, S, R, T) \xrightarrow{\text{divert}(J, R, S, T)} i(R) \cdot i(S) \cdot i(T) \; ; \; m(J, S, R, T) \]

\(^2\) To improve readability we use the "\(\cdot\)" operator as set constructor. For instance, we write \( x \cdot y \cdot z \) to denote the set \( \{ x, y, z \} \).

\(^3\) Here and in the sequel we use capital letters to denote variables.
The ability of encrypting and decrypting messages is modeled by:

\[ i(T) \cdot i(K) \xrightarrow{\text{encrypt}(K,T)} i(\{T\}_K) \; ; \] (1)

\[ i(\{T\}_K) \cdot i(K) \xrightarrow{\text{decrypt}(K,T)} i(T) \; ; \] (2)

Finally, the intruder can send arbitrary messages, possibly faking somebody else's identity in doing so:

\[ i(T) \cdot i(S) \cdot i(R) \xrightarrow{\text{fake}(J,R,S,T)} m(J, S, R, T) \; ; \]

In particular, this operator states that, if the intruder knows a term \(T\) and two agent identities \(S\) and \(R\), then it can send on the network channel the message \(T\) to \(R\) impersonating the identity of \(S\). Notice that with the above rules we represent the most general intruder based on the Dolev-Yao model [15]. In this model the intruder has the ability of eavesdropping and diverting messages as well as that of composing, decomposing, encrypting, and decrypting messages (provided the encryption keys are known).\(^4\) Finally, he can send those messages to other participants with a false identity. A security protocol is intended to enjoy a specific security property. In our example this property is the ability of authenticating Bob to Alice. A security property can be specified by providing a set of "bad" states, i.e. states whose reachability implies a violation of the property. For instance, any state containing a pair of fluents of the form \(w(4, A, A, [\,]) \cdot w(1, A, B, [\,])\) (i.e. \(A\) has finished a run of the protocol as initiator while \(B\) is still at the beginning of the protocol run as responder) witnesses a violation of the expected authentication property and should therefore be considered a bad state. It is easy to build a propositional formula \(\mathcal{G}\) such that each model of \(\mathcal{G}\) represents a bad state. For the above example, \(\mathcal{G} \equiv (w(4, a, a, [\,]) \land w(1, a, b, [\,])) \lor (w(4, b, b, [\,]) \land w(1, b, a, [\,]))\).
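The decrypt rule (2) induces a saturation of the intruder's knowledge. The sketch below (our own illustration, with terms modeled as nested tuples) closes a knowledge set under decryption of messages whose keys are known; we do not close under the encrypt rule (1), since encryption generates an unbounded set of terms.

```python
# Terms: atoms are strings; ("enc", key, body) stands for {body}_key.

def dy_close(knowledge):
    """Saturate intruder knowledge under the decrypt rule:
    from {T}_K and K derive T (rule (2) in the text)."""
    known = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            if isinstance(t, tuple) and t[0] == "enc":
                _, key, body = t
                if key in known and body not in known:
                    known.add(body)   # the intruder learns the plaintext
                    changed = True
    return known
```

For instance, from \(\{\{n\}_{k_2}\}_{k}\) together with \(k\) and \(k_2\), the closure derives \(n\) in two rounds.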
3 Automatic SAT-Compilation of Planning Problems

Let \(\Pi = (\mathcal{F}, \mathcal{A}, \text{Ops}, I, \mathcal{G})\) be a planning problem with finite \(\mathcal{F}\) and \(\mathcal{A}\) and let \(n\) be a positive integer; then it is possible to build a propositional formula \(\Phi_n\) whose models correspond to partial-order plans of length \(n\) representing solutions of \(\Pi\). The encoding of a planning problem into a set of SAT formulae can be done in a variety of ways (see [21,17] for a survey). The basic idea is to add an additional time-index to actions and fluents to indicate the state at which the action begins or the fluent holds. Fluents are thus indexed by 0 through \(n\) and actions by 0 through \(n - 1\). If \(p\) is a fluent or an action and \(i\) is an index in the appropriate range, then \(i{:}p\) is the corresponding time-indexed propositional variable.

\(^4\) In other words we make the perfect encryption assumption. It turns out that many protocols are flawed even under this strong assumption.

It is worth pointing out that the reduction of planning problems to SAT paves the way to an automatic SAT-compilation of protocol insecurity problems. However, a direct application of the approach is not viable, because the resulting encodings are of unmanageable size even for simple protocols. To overcome this difficulty, in [3] we put forward a number of optimizing transformations on protocol insecurity problems that make the approach both feasible and effective on many protocols of interest. For instance, certain rules can be merged together without losing any possible attack on the protocol, thereby decreasing the number of actions and hence the size of the resulting SAT encodings. A detailed description of the optimizing transformations goes beyond the scope of this paper and the interested reader should consult [3].
In the rest of this section we formally describe the standard linear encoding and its combination with the bitwise representation of actions.

### 3.1 Standard Linear Encoding

By using the standard linear encoding, $\Phi_n$ is the smallest set of formulae (intended conjunctively) such that:

- **Initial State Axioms**: for each fluent $p$, if $p \in I$ then $0{:}p \in \Phi_n$, otherwise $\neg 0{:}p \in \Phi_n$;
- **Goal State Axioms**: $G_n \in \Phi_n$, where $G_n$ is the formula $G$ with each fluent occurring in it indexed by $n$;
- **Universal Axioms**: for each $(Pre \xrightarrow{\alpha} Add \, ; \, Del) \in Ops$ and $i = 0, \ldots, n-1$:
$$(i{:}\alpha \supset \bigwedge \{i{:}f \mid f \in Pre\}) \in \Phi_n$$
$$(i{:}\alpha \supset \bigwedge \{(i{+}1){:}f \mid f \in Add\}) \in \Phi_n$$
$$(i{:}\alpha \supset \bigwedge \{\neg (i{+}1){:}f \mid f \in Del\}) \in \Phi_n$$
- **Explanatory Frame Axioms**: for each fluent $f$ and $i = 0, \ldots, n-1$:
$$((i{:}f \land \neg (i{+}1){:}f) \supset \bigvee \{i{:}\alpha \mid (Pre \xrightarrow{\alpha} Add \, ; \, Del) \in Ops, f \in Del\}) \in \Phi_n$$
$$((\neg i{:}f \land (i{+}1){:}f) \supset \bigvee \{i{:}\alpha \mid (Pre \xrightarrow{\alpha} Add \, ; \, Del) \in Ops, f \in Add\}) \in \Phi_n$$
- **Conflict Exclusion Axioms (CEA)**: for $i = 0, \ldots, n-1$:
$$\neg (i{:}\alpha_1 \land i{:}\alpha_2) \in \Phi_n$$
for all $\alpha_1 \neq \alpha_2$ such that $(Pre_1 \xrightarrow{\alpha_1} Add_1 \, ; \, Del_1) \in Ops$, $(Pre_2 \xrightarrow{\alpha_2} Add_2 \, ; \, Del_2) \in Ops$, and $Pre_1 \cap Del_2 \neq \emptyset$ or $Pre_2 \cap Del_1 \neq \emptyset$.

It is immediate to see that the number of literals in $\Phi_n$ is in $O(n|\mathcal{F}|+n|\mathcal{A}|)$. Moreover, the number of clauses generated by the Universal Axioms is in $O(nP_0|\mathcal{A}|)$, where $P_0$ is the maximal number of fluents mentioned in a list of an operator (usually a small number); the number of clauses generated by the Explanatory Frame Axioms is in $O(n|\mathcal{F}|)$; finally, the number of clauses generated by the CEA is in $O(n|\mathcal{A}|^2)$.
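The quadratic growth of the CEA can be made concrete with a small generator. The sketch below (our own illustration, not SATMC's encoder) produces only the conflict exclusion axioms, each as a pair of negated time-indexed action literals.

```python
from itertools import combinations

def cea_clauses(ops, n):
    """Conflict exclusion axioms: forbid i:a1 and i:a2 together whenever
    one action's precondition set intersects the other's delete set.
    Each op is (name, pre, add, delete); a clause is a pair of
    negated time-indexed action literals."""
    clauses = []
    for (n1, p1, _, d1), (n2, p2, _, d2) in combinations(ops, 2):
        if p1 & d2 or p2 & d1:          # the two actions conflict
            for i in range(n):          # one axiom per time step
                clauses.append((f"-{i}:{n1}", f"-{i}:{n2}"))
    return clauses
```

With $|\mathcal{A}|$ actions in pairwise conflict this yields $n\binom{|\mathcal{A}|}{2}$ clauses, i.e. the $O(n|\mathcal{A}|^2)$ growth noted above.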
For instance, the application of the above encoding to the EKE protocol (with $n = 5$) generates a propositional formula with 61,508 atoms and 13,948,534 clauses, of which 13,165,050 (about 94%) are due to the CEA.

### 3.2 Linear Encoding combined with Bitwise Action Representation

Since the Conflict Exclusion Axioms (CEA) grow quadratically in the number of actions, the application of the standard linear encoding to asynchronous concurrent systems can generate huge propositional formulae. The CEA are needed to restrict which actions may occur simultaneously and therefore to guarantee that any model of the propositional formula corresponds to a partial-order plan representing solutions (plans) of the original planning problem. The bitwise action representation does not require CEA. In order to represent univocally each action in $\mathcal{A} = \{\alpha^1, \ldots, \alpha^{|\mathcal{A}|}\}$, $k = \lceil \log_2 |\mathcal{A}| \rceil$ propositional variables ($b^1, \ldots, b^k$) are used. Let $||\alpha^j||$ be the propositional formula whose model identifies the bitwise representation of the action $\alpha^j$; then the combination of the standard linear encoding with the Bitwise Action Representation (BAR) results in the propositional formula $\Phi^\ell_n$ defined as in Section 3.1, where the conflict exclusion axioms are dropped and each occurrence of $i{:}\alpha$ is replaced by $i{:}||\alpha||$. The number of literals in $\Phi^\ell_n$ decreases, w.r.t. the standard linear encoding, to $O(n|\mathcal{F}| + nk)$. However, while the number of clauses generated by the Universal Axioms does not change and the conflict exclusion axioms are not required, the number of clauses generated by the Explanatory Frame Axioms combined with the bitwise action representation becomes, in the worst case, exponential in the number of actions. Precisely, it is in $O(n k^{|\mathcal{A}|} |\mathcal{F}|)$.
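The bitwise representation itself is straightforward; here is our own sketch, where each action index is spelled out as a conjunction of bit literals over $k = \lceil \log_2 |\mathcal{A}| \rceil$ variables (variable names `b1..bk` are illustrative).

```python
import math

def bitwise_repr(actions):
    """Map each action to the tuple of bit literals (its conjunction ||a||)
    over k = ceil(log2 |A|) propositional variables b1..bk."""
    k = max(1, math.ceil(math.log2(len(actions))))
    rep = {}
    for j, a in enumerate(actions):
        lits = []
        for bit in range(k):
            var = f"b{bit + 1}"
            # positive literal if this bit of the index j is 1, else negated
            lits.append(var if (j >> bit) & 1 else f"-{var}")
        rep[a] = tuple(lits)
    return k, rep
```

Each action gets a distinct assignment of the $k$ bits, so no two actions can hold at the same step, which is why the CEA become unnecessary.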
To avoid this exponential growth it is sufficient to restore in the Explanatory Frame Axioms the propositional variables associated with the actions in place of their bitwise representation and to extend the formula as follows:

- **Bitwise Axioms:** for each $(\text{Pre} \xrightarrow{\alpha} \text{Add} \, ; \, \text{Del}) \in \text{Ops}$ and $i = 0, \ldots, n - 1$:
$$i{:}||\alpha|| \equiv i{:}\alpha \in \Phi^\ell_n$$

With respect to the standard linear encoding, the number of literals in $\Phi^\ell_n$ increases to $O(n|\mathcal{F}| + n|\mathcal{A}| + nk)$, and the number of clauses in $\Phi^\ell_n$ is neither quadratic nor exponential in the number of actions. In fact, the number of clauses generated by the Bitwise Axioms is in $O(n|\mathcal{A}| + nk|\mathcal{A}|)$. For instance, the application of this encoding to the EKE protocol with $n = 5$ generates a propositional formula with 61,578 atoms (among these, 70 atoms are used to represent the actions) and 1,630,044 clauses (among these, 846,560, i.e. about 52%, result from the bitwise axioms). From here on we call bitwise encoding this "smart" variant of the standard linear encoding combined with the bitwise action representation. An important difference between the standard linear encoding and the bitwise encoding is that the former allows for some actions to be executed in parallel (at the same step), while the latter imposes that one (and only one) action can be executed at each step. Hence solutions found at step $\bar{n}$ by applying the standard linear encoding will be found at step $n \geq \bar{n}$ if the bitwise encoding is applied.

---

5 Notice that $i{:}||\alpha||$ is the formula $||\alpha||$ in which each propositional variable occurring in it is indexed by $i$.

4 The Abstraction/Refinement Strategy

Abstraction [11] is a powerful technique for the analysis of large-state systems.
The idea is to abstract the system under consideration into a simpler one that has all the behaviors of the original one, but may also exhibit some spurious behaviors. Thus—by construction—if a safety property holds for the abstract system, it also holds for the concrete one. On the other hand, a counterexample for the abstract system does not always correspond to a counterexample for the concrete one. If the behavior that violates the safety property in the abstract system cannot be reproduced in the concrete system, the counterexample is said to be spurious. When such a counterexample is found, the abstraction must be refined in order to eliminate the spurious behavior. The procedure is then iterated until either a non-spurious counterexample is found, or the abstract system satisfies the safety property. More precisely, let $P$ be a set of propositional letters; we define a labeled transition system (LTS) to be a tuple $\langle \Sigma, \mathcal{I}, \mathcal{L}, \mathcal{R}, \nu \rangle$, where $\Sigma$ is a set of states, $\mathcal{I} \subseteq \Sigma$ is the set of initial states, $\mathcal{L}$ is the set of transition labels, $\mathcal{R} \subseteq \{s_0 \xrightarrow{t} s_1 \mid s_0, s_1 \in \Sigma; t \in \mathcal{L}\}$ is the labeled transition relation, and $\nu$ is a total function mapping states into $2^P$. A computation path of length $k+1$ of an LTS is a sequence $s_0 \xrightarrow{t_1} s_1 \cdots \xrightarrow{t_{k+1}} s_{k+1}$ such that $k \geq 0$, $s_0 \in \mathcal{I}$, and $s_i \xrightarrow{t_{i+1}} s_{i+1} \in \mathcal{R}$ for each $i = 0, \ldots, k$. Let $M = \langle \Sigma, \mathcal{I}, \mathcal{L}, \mathcal{R}, \nu \rangle$ and $M^a = \langle \Sigma^a, \mathcal{I}^a, \mathcal{L}^a, \mathcal{R}^a, \nu^a \rangle$ be two LTSs and let $h : \Sigma \rightarrow \Sigma^a$ be a total (abstraction) function; then $M^a$ is an abstraction of $M$ w.r.t.
$h$, iff the following conditions hold:

- for every state $s$ in $\mathcal{I}$ there exists a state $s^a$ in $\mathcal{I}^a$ such that $h(s) = s^a$;
- for all states $s_0, s_1 \in \Sigma$ and $s_0^a \in \Sigma^a$ with $h(s_0) = s_0^a$, if $s_0 \xrightarrow{t} s_1 \in \mathcal{R}$ then there exists a state $s_1^a \in \Sigma^a$ such that $s_0^a \xrightarrow{t} s_1^a \in \mathcal{R}^a$ and $h(s_1) = s_1^a$.

If $M^a$ is an abstraction of $M$, then it is simple to prove that for every computation path $s_0 \xrightarrow{t_1} \cdots \xrightarrow{t_{k+1}} s_{k+1}$ of $M$ there exists a computation path $s_0^a \xrightarrow{t_1} \cdots \xrightarrow{t_{k+1}} s_{k+1}^a$ of $M^a$ such that $h(s_i) = s_i^a$ for each $i = 0, \ldots, k+1$. As a consequence, the following fact (called the preservation theorem) holds: given a propositional formula $\varphi$ respecting the abstraction function $h$, if $M^a \models AG\, \varphi$ then $M \models AG\, \varphi$. Since in the standard linear encoding the number of CEA is the main source of difficulty, it is tempting to avoid their generation altogether. It turns out that the resulting problem is an abstraction of the original problem. To show this, we must define the model-checking problem associated with a planning problem. Let $\Pi = \langle \mathcal{F}, \mathcal{A}, Ops, I, G \rangle$ be a planning problem; then the model-checking problem associated with $\Pi$ is $M \models AG\, \neg G$, where $AG\, \neg G$ is the safety property to be checked and $M = \langle \Sigma, \mathcal{I}, \mathcal{L}, \mathcal{R}, \nu \rangle$ is the concrete LTS such that $\nu : \Sigma \rightarrow 2^P$, where

---

6 We restrict our attention to LTSs with a total transition relation.

7 Following [11] we say that a propositional formula $\varphi$ respects an abstraction function $h$ if for each $s \in \Sigma$, $\nu^a(h(s)) \models \varphi$ implies $\nu(s) \models \varphi$.
8 Note that $A$ and $G$ represent the "for all computation paths" path quantifier and the "globally" temporal operator, respectively.

9 Notice that, given a set $S$, we indicate with $2^S$ the powerset of $S$.

\[ \mathcal{I} = \{ s \in \Sigma \mid \nu(s) = I \}, \; \mathcal{L} \subseteq 2^{\mathcal{A}}, \text{ and } \mathcal{R} \text{ is the set of labeled transitions } s_0 \xrightarrow{\{\alpha_1, \ldots, \alpha_m\}} s_1 \text{ with } m \geq 0 \text{ such that the following conditions hold:}^{10} \]

1. for each \( i = 1, \ldots, m \), \( (\text{Pre}_i \xrightarrow{\alpha_i} \text{Add}_i \, ; \, \text{Del}_i) \in \text{Ops} \);
2. \( \bigcup_{i=1}^{m} \text{Pre}_i \subseteq \nu(s_0) \);
3. \( \nu(s_1) = (\nu(s_0) \setminus \bigcup_{i=1}^{m} \text{Del}_i) \cup (\bigcup_{i=1}^{m} \text{Add}_i) \);
4. for each \( \alpha_i, \alpha_j \in \{ \alpha_1, \ldots, \alpha_m \} \) with \( i \neq j \), \( \text{Pre}_i \cap \text{Del}_j = \emptyset \).

Intuitively, from a state \( s_0 \) it is possible to reach a state \( s_1 \) iff there exists a set of actions such that their preconditions hold in \( s_0 \) and, for each pair of actions in that set, the conflict exclusion constraint is satisfied (i.e. the precondition list of the first action does not intersect the delete list of the second and vice versa). By abstracting away the CEA we obtain the abstract LTS \( M^a = \langle \Sigma, \mathcal{I}, \mathcal{L}^a, \mathcal{R}^a, \nu^a \rangle \) whose main difference from the concrete system \( M \) lies in the labeled transition relation. In particular, since in the abstract system conflict exclusion is not enforced, \( \mathcal{R}^a \) must satisfy only conditions (1), (2), and (3). Hence, all the computation paths allowed in the concrete system are also allowed in the abstract system (that is, \( \mathcal{R} \subseteq \mathcal{R}^a \)) and the preservation theorem holds.
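Condition (4) is exactly what the refinement loop has to check on abstract counterexamples. A sketch of the conflict check (our own illustration, not SATMC code), with operators as `(name, Pre, Add, Del)` tuples:

```python
from itertools import combinations

def find_conflicts(step_actions, ops_by_name):
    """Return the pairs of actions executed at the same step that violate
    condition (4): Pre_i must not intersect Del_j for i != j."""
    conflicts = []
    for a1, a2 in combinations(step_actions, 2):
        _, pre1, _, del1 = ops_by_name[a1]
        _, pre2, _, del2 = ops_by_name[a2]
        if pre1 & del2 or pre2 & del1:
            conflicts.append((a1, a2))
    return conflicts

def validate(counterexample, ops_by_name):
    """A counterexample (a list of per-step action sets) is spurious iff
    some step contains a conflicting pair; return all such pairs."""
    return [c for step in counterexample
            for c in find_conflicts(step, ops_by_name)]
```

An empty result means the abstract counterexample is realizable in the concrete system; a non-empty one names the conflicting pairs whose exclusion clauses must be added during refinement.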
On the other hand, if a counterexample \( s_0 \xrightarrow{A_1} \cdots \xrightarrow{A_{k+1}} s_{k+1} \) (where \( s_0 \in \mathcal{I} \), \( A_i \) is a set of actions for \( i = 1, \ldots, k+1 \), and \( s_{k+1} \) is a bad state, i.e. it violates \( \neg G \)) is found in the abstract system, then, in order to establish that it is not spurious, it has to be validated in the concrete system. The validation procedure checks that each step of the counterexample does not involve conflicting actions, i.e. it checks condition (4) over the sets \( A_i \) (\( i = 1, \ldots, k+1 \)). When the validation procedure fails, the abstract LTS must be refined by adding to its transition relation the conflict exclusion constraints, thereby ruling out the conflicting actions discovered so far in spurious counterexamples. The whole procedure is repeated until either a counterexample without conflicting actions is found, or the abstract system satisfies the safety property.

5 A SAT-based Model-Checker for Security Protocols

We have implemented the above ideas in SATMC, a SAT-based model-checker for security protocol analysis. Given a protocol insecurity problem \( \Xi \), SATMC starts by applying the aforementioned optimizing transformations to \( \Xi \), thereby obtaining a new protocol insecurity problem \( \Xi' \) which is then translated into a corresponding planning problem \( \Pi_{\Xi'} \). \( \Pi_{\Xi'} \) is in turn compiled into SAT using one of the methodologies outlined in Section 3 for increasing values of \( n \), and the propositional formula generated at each step is fed to a state-of-the-art SAT solver (currently Chaff [27], SIM [18], and SATO [31] are supported). As soon as a satisfiable formula is found, the corresponding model is translated back into a partial order plan\(^{11}\) which is reported to the user.

\(^{10}\) If \( m = 0 \) then no action is applied and therefore \( s_1 = s_0 \) (stuttering).

\(^{11}\) The partial order plan represents attacks on the protocol.
We have run our tool against a selection of (flawed) security protocols drawn from the Clark/Jacob library [10]. For each protocol we have built a corresponding protocol insecurity problem modeling a scenario with a bounded number of sessions in which the involved principals exchange messages on a channel controlled by the most general intruder based on the Dolev-Yao model. Moreover, we assume perfect cryptography (see Section 2) and that all atoms are typed, i.e. we exclude type confusion (strong typing assumption).\(^{12}\) Note that since the number of sessions is bounded and strong typing is assumed, an intruder with finite knowledge is sufficient to find all the possible attacks that can occur in this scenario. The results of our experiments are reported in Table 1. Each protocol has been analyzed by applying one of the following encoding techniques: **CEA**: standard linear encoding with generation of the CEA enabled; **BITWISE**: bitwise encoding; and **NoCEA**: standard linear encoding with generation of the CEA disabled and therefore combined with the Abstraction Refinement Loop. For each protocol we give the number of fluents (F) and of actions (Acts) in the corresponding planning problem, and for each encoding technique we give the smallest value of n at which the attack is found (N), the number of propositional variables (A) and clauses (CL) in the SAT formula (in thousands), the time spent to generate the SAT formula (EncT), the time spent by the SAT solver to solve the last SAT formula (Last), and the total time spent by the SAT solver to solve all the SAT formulae generated for that protocol (Tot). In the context of NoCEA a comparison between Chaff and SIM has been performed, and the solving times are shown together with the number of iterations of the Abstraction Refinement Loop (#). When the CEA encoding technique is applied, the encoding time largely dominates the solving time, and this is strictly related to the size of the SAT instances generated.
Both the size of and the time spent to generate the SAT formulae drop significantly if BITWISE is used. This enabled us to encode and find attacks on protocols (e.g. Andrew, KaoChow 2, KaoChow 3, and Woo-Lam M) that could not be solved by using the CEA encoding technique. However, finding an attack using BITWISE can require more iterative-deepening steps (cf. the N columns) than those required by applying the standard linear encoding. As a consequence, on some instances, such as the NSCK and the SPLICE protocols, CEA performs better than BITWISE both in terms of the size of the SAT formulae and of the encoding time. Moreover, the addition of the bitwise axioms can significantly increase the solving time (see, e.g., the KaoChow 2 and KaoChow 3 protocols).

---

12 As pointed out in [19], type-flaw attacks can be prevented by tagging the fields of a message with information indicating their intended type. The strong typing assumption therefore allows us to restrict the search space and focus on the most interesting attacks.

13 Experiments have been carried out on a PC with a 1.4 GHz CPU and 1 GB of RAM.

14 Times are measured in seconds.

### Table 1.
Experiments on protocols from the Clark/Jacob library <table> <thead> <tr> <th>Protocol</th> <th>F</th> <th>Acts</th> <th>CEA</th> <th>BITWISE</th> <th>NoCEA</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> <td>N</td> <td>A (K)</td> <td>CL (K)</td> </tr> <tr> <td>Andrew</td> <td>463</td> <td>15,650</td> <td>9</td> <td>145</td> <td>-</td> </tr> <tr> <td>EKE</td> <td>1,148</td> <td>10,924</td> <td>5</td> <td>62</td> <td>13,949</td> </tr> <tr> <td>ISO-CCF-1 U</td> <td>39</td> <td>22</td> <td>4</td> <td>&lt;1</td> <td>&lt;1</td> </tr> <tr> <td>ISO-CCF-2 M</td> <td>81</td> <td>132</td> <td>4</td> <td>&lt;1</td> <td>6</td> </tr> <tr> <td>ISO-PK-1 U</td> <td>61</td> <td>49</td> <td>4</td> <td>&lt;1</td> <td>2</td> </tr> <tr> <td>ISO-PK-2 M</td> <td>125</td> <td>279</td> <td>4</td> <td>2</td> <td>17</td> </tr> <tr> <td>ISO-SK-1 U</td> <td>39</td> <td>25</td> <td>4</td> <td>&lt;1</td> <td>&lt;1</td> </tr> <tr> <td>ISO-SK-2 M</td> <td>100</td> <td>87</td> <td>4</td> <td>&lt;1</td> <td>12</td> </tr> <tr> <td>KaoChow 1</td> <td>2,753</td> <td>2,042</td> <td>7</td> <td>36</td> <td>355</td> </tr> <tr> <td>KaoChow 2</td> <td>32,963</td> <td>22,344</td> <td>9</td> <td>531</td> <td>35,178</td> </tr> <tr> <td>KaoChow 3</td> <td>49,378</td> <td>55,727</td> <td>9</td> <td>995</td> <td>-</td> </tr> <tr> <td>NSCK</td> <td>6,755</td> <td>5,220</td> <td>9</td> <td>115</td> <td>787</td> </tr> <tr> <td>NSPK</td> <td>429</td> <td>662</td> <td>7</td> <td>7</td> <td>51</td> </tr> <tr> <td>NSPK-server</td> <td>298</td> <td>604</td> <td>8</td> <td>9</td> <td>212</td> </tr> <tr> <td>SPLICE</td> <td>822</td> <td>638</td> <td>9</td> <td>14</td> <td>91</td> </tr> <tr> <td>Swick 1</td> <td>496</td> <td>258</td> <td>5</td> <td>4</td> <td>17</td> </tr> <tr> <td>Swick 2</td> <td>693</td> <td>450</td> <td>6</td> <td>8</td> <td>59</td> </tr> <tr> <td>Swick 3</td> <td>526</td> <td>482</td> <td>4</td> <td>5</td> <td>12</td>
</tr> <tr> <td>Swick 4</td> <td>1,357</td> <td>1,283</td> <td>5</td> <td>15</td> <td>64</td> </tr> <tr> <td>Stubblebine rep</td> <td>1,409</td> <td>2,408</td> <td>3</td> <td>13</td> <td>2,048</td> </tr> <tr> <td>Woo-Lam M</td> <td>45,584</td> <td>27,051</td> <td>6</td> <td>481</td> <td>-</td> </tr> </tbody> </table>

(K) indicates that the values in these columns are given in thousands; "-" means that a memory-out has been reached; and "&lt;1" means that the number of atoms or clauses is less than one thousand.

To cope with the above issues it is convenient to apply the **NoCEA** encoding technique, which is based on the *Abstraction Refinement Loop* described in Section 4. Of course, by disabling the generation of the conflict exclusion axioms, we are no longer guaranteed that the solutions found are linearizable and hence executable.\(^\text{15}\) SATMC therefore looks for conflicting actions in the partial order plan found and extends the previously generated encoding with clauses negating the conflicts (if any). The resulting formula is then fed back to the SAT solver and the whole procedure is iterated until either a solution without conflicts is found or the formula becomes unsatisfiable. The **NoCEA** encoding technique performs uniformly better on all the analyzed protocols. Another interesting point is that, while the time spent by Chaff and SIM to solve the propositional formulae is comparable in most cases (see the **Last** columns under **NoCEA**),\(^\text{16}\) the number of iterations of the Abstraction Refinement Loop is in most cases smaller when SIM is used. This is due to the different strategies implemented by the two solvers. Both solvers incrementally build a satisfying assignment, but while SIM terminates (possibly returning a partial assignment) as soon as all the clauses are subsumed by the current satisfying assignment, Chaff avoids the subsumption check for efficiency reasons and thus halts only when the assignment is total.
As a consequence, the partial order plans found using Chaff usually contain more actions and are thus likely to have more conflicts. This explains why SIM is in many cases more effective than Chaff. SATMC is one of the back-ends of the AVISS tool [2]. Using the tool, the user can specify a protocol and the security properties to be checked using a high-level specification language similar to the Alice&Bob notation we used in Section 2 to present our simple authentication protocol. The AVISS tool translates the specification into a rewrite-based declarative Intermediate Format (IF) based on multiset rewriting which is amenable to formal analysis. SATMC can optionally accept protocol specifications in the IF language, which are then automatically translated into equivalent planning problems. Thus, by using SATMC in combination with the AVISS tool, the user is relieved from the time-consuming and error-prone activity of providing a detailed specification of the protocol insecurity problem in terms of a planning problem.

6 Related Work

The idea of regarding security protocols as planning problems is not new. An executable planning specification language \(\mathcal{ALSP}\) for representing security protocols and checking the possibility of attacks via a model finder for logic programs with stable model semantics has been proposed in [1]. Compared to this approach, SATMC performs better (on the available results) and can readily exploit improvements of state-of-the-art SAT solvers.

\(^{15}\) A counterexample in the model-checking problem corresponds to a model of the propositional formula generated and, therefore, to a solution of the planning problem.

\(^{16}\) Notice that Chaff performs better on big instances.

Gavin Lowe and his group at the University of Leicester (UK) have analyzed problems from the Clark/Jacob library [10] using Casper/FDR2 [16]. This approach has been very successful in discovering new flaws in protocols.
However, first experiments on the search time indicate that SATMC is more effective than Casper/FDR2. Besides that, Casper/FDR2 limits the size of messages that are sent in the network and is not able to handle non-atomic keys. The Murphi model-checker has been used in [26] for analyzing some cryptographic protocols such as the Needham-Schroeder Public-Key protocol. Experimental results indicate that Murphi suffers from state-space explosion. To cope with this problem, several restrictions on the model are put in place. For instance, the size of the channel is fixed a priori. In the context of the AVISS tool we have performed a thorough comparison between the back-ends integrated in it. The On-the-Fly Model-Checker (OFMC) [6] performs uniformly well on the whole Clark/Jacob library. However, it is interesting to observe that in many cases the time spent by the SAT solver is equal to or even smaller than the time spent by OFMC for the same protocol. The Constraint-Logic-based model-checker (CL) [9] is able to find type-flaw attacks (as is OFMC). However, the overall timing of SATMC is better than that of CL. Detailed results about these experiments can be found in [2]. 7 Conclusions and Perspectives The work presented in this paper is part of a larger effort aiming at the construction of an industrial-strength SAT-based model-checker for security protocols. In this paper we have introduced an abstraction/refinement strategy which, by interleaving encoding and solving, allows us to halve the size of the propositional encoding for the most complex protocols we have analyzed so far. Since even bigger savings are expected on more complex protocols, the improved procedure presented in this paper paves the way for the analysis of complex protocols (e.g. e-commerce protocols) arising in real-world applications.
As far as the role of the SAT-solver is concerned, our experiments using Chaff and SIM indicate that the ability to return partial models can be as important as the pure solving time. In future work we plan to tighten the integration with the SAT-solver in such a way that clauses generated by the refinement phase are "learnt" by the solver, thereby directly exploiting the sophisticated search strategies incorporated into the solver. However, since the solving time is still dominated by the encoding time, our current focus is on finding ways to reduce the latter. Experimental results obtained with COMPACT [13] indicate that the application of propositional simplification techniques dramatically reduces the size of the encodings, suggesting that they suffer from some form of redundancy. A close look at the problem reveals that this is due to the fact that linear encodings do not exploit the knowledge about the initial state and therefore they encode protocol behaviors which can be safely neglected. This led us to consider a more sophisticated encoding technique, graphplan-based encoding [21], which leads to even smaller propositional encodings [4]. Notice however that the greater generality of linear encodings is useful when analyzing protocols w.r.t. partially specified initial states. Another promising approach amounts to treating properties of cryptographic operations as invariants. Currently these properties are modeled as actions (cf. rules (1) and (2) in Section 2) and this has a bad impact on the size of the final encoding. A more natural way to deal with these properties amounts to building them into the encoding, but this requires, among other things, a modification of the explanatory frame axioms.

References

1. L. Carlucci Aiello and F. Massacci. Verifying security protocols as planning in
2. A. Armando, D. Basin, M. Bouallagui, Y. Chevalier, L. Compagna, S. Moedersheim, M. Rusinowitch, M. Turuani, L. Viganò, and L. Vigneron. The AVISS
3. A.
Armando and L. Compagna. Automatic SAT-Compilation of Protocol Insecurity Problems via Reduction to Planning. In Intl. Conf. on Formal Techniques for Networked and Distributed Systems (FORTE), Houston, Texas, November 2002.
5. David Basin and Grit Denker. Maude versus haskell: an experimental comparison in security protocol analysis. In Kokichi Futatsugi, editor, Electronic Notes in
6. David Basin, Sebastian Moedersheim, and Luca Viganò. An On-the-Fly Model-
7. D. Bolignano. Towards the formal verification of electronic commerce protocols. In
8. Cervesato, Durgin, Mitchell, Lincoln, and Scedrov. Relating strands and multiset rewriting for security protocol analysis. In PCSFW: Proceedings of The 13th
In Proceedings of the Automated Software Engineering Conference (ASE'01). IEEE drareview.ps.gz.
11. E. Clarke, A. Gupta, J. Kukula, and O. Strichman. SAT based abstraction-refinement using ILP and machine learning techniques. In Proc. of Conference on Computer-Aided Verification (CAV’02), LNCS, 2002.
13. James M. Crawford and L. D. Anton. Experimental results on the crossover point in satisfiability problems. In Richard Fikes and Wendy Lehnert, editors, Proceedings of
Abstract. The knowledge base paradigm aims to express domain knowledge in a rich formal language, and to use this domain knowledge as a knowledge base to solve various problems and tasks that arise in the domain by applying multiple forms of inference. As such, the paradigm applies a strict separation of concerns between information and problem solving. In this paper, we analyze the principles and feasibility of the knowledge base paradigm in the context of an important class of applications: interactive configuration problems. In interactive configuration problems, a configuration of interrelated objects under constraints is searched for, where the system assists the user in reaching an intended configuration. It is widely recognized in industry that good software solutions for these problems are very difficult to develop. We investigate such problems from the perspective of the KB paradigm. We show that multiple functionalities in this domain can be achieved by applying different forms of logical inference on a formal specification of the configuration domain. We report on a proof of concept of this approach in a real-life application with a banking company. 1 Introduction In this paper, we investigate the application of knowledge representation and reasoning to the problem of interactive configuration. In the past decades, enormous progress has been made in many different areas of computational logic. This resulted in a complex landscape with many declarative paradigms, languages and communities. One issue that fragments computational logic is the reasoning/inference task. Computational logic is divided into different declarative paradigms, each with its own syntactical style, terminology and conceptuology, and designated form of inference (e.g., deductive logic, logic programming, abductive logic programming, databases (query inference), answer set programming (answer set generation), constraint programming, etc.).
Yet, in all of them declarative propositions need to be expressed. Take, e.g., “each lecture takes place at some time slot”. This proposition could be an expression to be deduced from a formal specification if the task were a verification problem, or to be queried in a database, or it could be a constraint for a scheduling problem. It is, in the first place, just a piece of information, and we see no reason why, depending on the task to be solved, it should be expressed in a different formalism (classical logic, SQL, ASP, MiniZinc, etc.). The Knowledge Base (KB) paradigm [8] was proposed as an answer to this. The KB paradigm applies a strict separation of concerns to information and problem solving. A KB system allows information to be stored in a knowledge base, and provides a range of inference methods. With these inference methods various types of problems and tasks can be solved using the same knowledge base. As such, the knowledge base is neither a program nor a description of a problem; it cannot be executed or run. It is nothing but information. However, this information can be used to solve multiple sorts of problems. Stated differently, many declarative problem solving paradigms are mono-inferential: they are based on a single form of inference. Instead, the KB paradigm is multi-inferential. We believe that this implements a more natural, pure view of what declarative logic is aimed to be. The FO(·) KB project [8] is a concrete project that has now been running for a number of years. Its aim is to integrate different useful language constructs and forms of inference from different declarative paradigms in one rich declarative language and a KB system. So far, it has led to the KB language FO(·) and the KB system IDP, which were used in the configuration experiment described in this paper. An interactive configuration (IC) problem [9] is an interactive version of a constraint solving problem.
One or more users search for a configuration of objects and relations between them that satisfies a set of constraints. Industry abounds with interactive configuration problems: configuring composite physical systems such as cars and computers, insurances, loans, schedules involving human interaction, webshops (where clients choose composite objects), etc. However, building such software is renowned in industry as difficult, and no broadly accepted solution methods are available [3]. Building software support using standard imperative programming is often a nightmare, due to the fact that (1) many functionalities need to be provided, (2) they are complex to implement, and (3) constraints on the configuration tend to spread out over the application, in the form of snippets of code performing some computation relative to the constraint (e.g., context-dependent checks or propagations), often leading to an unacceptable maintenance cost. This makes interactive configuration an excellent domain to illustrate the advantages of declarative methods over standard imperative or object-oriented programming. Our research question is: can we express the constraints of correct configurations in a declarative logic and provide the required functionalities by applying inference to this domain knowledge? This is a KRR question, albeit a difficult one. In the first place, some of the domain knowledge may be complex. For an example in the context of a computer configuration problem, take the following constraint: the total memory usage of the different software processes that need to be in main memory simultaneously may not exceed the available RAM. It takes an expressive knowledge representation language to (compactly and naturally) express such a constraint. Many interactive configuration problems include complex constraints: various sorts of quantification, aggregates (as illustrated above), definitions (sometimes inductive), etc.
Moreover, an interactive configuration system needs to provide many functionalities: checking the validity of a fully specified configuration, correct and safe reasoning on a partially specified configuration (this involves reasoning on incomplete knowledge, sometimes with infinite or unknown domains), computing impossible values or forced values for attributes, generating sensible questions to the user, providing explanations of why certain values are impossible, backtracking if the user regrets some choices, supporting the user by filling in his don’t-cares, potentially taking into account a cost function, etc. That declarative methods are particularly suitable for solving this type of problem has been acknowledged before, and several systems and languages have been developed [9,17,21,23]. The first contribution of our work is the analysis of IC problems from a Knowledge Representation point of view. We show that multiple functionalities in this domain can be achieved by applying different forms of logical inference on a formal specification of the configuration domain. We focus on the forms of inference themselves, determining in terms of which of them the different functionalities can be supplied. The second contribution is, vice versa, a study of the feasibility and usefulness of the KB paradigm in this important class of applications. The logic used in this experiment is the logic FO(·) [7], an extension of first-order logic (FO), and the system is the IDP system [6]. We discuss the complexity of (the decision problems of) the inference problems and why they are solvable, despite the high expressivity of the language and the complexity of inference. This research has its origin in an experimental IC system we developed in collaboration with industry. 2 The FO(·) KB Project The Language. FO(·) refers to the class of extensions of first order logic (FO), as is common in logic, e.g.
FO(LFP) stands for the extension of FO with a least fixpoint construction [11]. Currently, the language of the IDP system in the project is FO(T, ID, Agg, arit, PF) [7,14]: FO extended with types, definitions, aggregates, arithmetic and partial functions. In this project we will use the subset language FO(T, Agg, arit, PF). Abusing notation, we will use FO(·) as an abbreviation for this language. Below, we introduce the aspects of the logic and its syntax on which this paper relies. A vocabulary is a set $\Sigma$ of type, predicate and function symbols. Variables $x, y$, atoms $A$, FO-formulas $\varphi$ are defined as usual. Aggregate terms are of the form $\text{Agg}(E)$, with $\text{Agg}$ an aggregate function symbol and $E$ an expression $\{(\pi, F(\pi)) | \varphi(\pi)\}$, where $\varphi$ can be any FO-formula, $F$ a function symbol and $\pi$ a tuple of variables. Examples are the cardinality, sum, product, maximum and minimum aggregate functions. For example $\text{sum}\{(x, F(x)) | \varphi(x)\}$ is read as $\sum_{x \in \{y | \varphi(y)\}} F(x)$. A term in FO(·) can be an aggregate term or a term as defined in FO. A theory is a set of FO(·) formulas. A partial set on domain $D$ is a function from $D$ to $\{t, u, f\}$. A partial set is two-valued (or total) if $u$ does not belong to its range. A (partial) structure $S$ consists of a domain $D_τ$ for all types $τ$ in the vocabulary $Σ$ and an assignment of a partial set $σ^S$ to each symbol $σ ∈ Σ$, called the interpretation of $σ$ in $S$. The interpretation $P^S$ of a predicate symbol $P$ with type $[τ_1, ..., τ_n]$ in $S$ is a partial set on domain $D_{τ_1} × ... × D_{τ_n}$. For a function $F$ with type $[τ_1, ..., τ_n] → τ_{n+1}$, the interpretation $F^S$ of $F$ in $S$ is a partial set on domain $D_{τ_1} × ... × D_{τ_n} × D_{τ_{n+1}}$. In case the interpretation of $σ$ in $S$ is a two-valued set, we abuse notation and use $σ^S$ as shorthand for $\{d|σ^S(d) = t\}$. 
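The aggregate notation introduced above maps directly onto a comprehension. A minimal Python illustration of $\text{sum}\{(x, F(x)) \mid \varphi(x)\}$, with an invented toy domain, $F$ and $\varphi$ of our own choosing:

```python
# sum{(x, F(x)) | phi(x)}: sum F(x) over the domain elements
# that satisfy phi -- toy domain, F and phi for illustration only.
domain = range(1, 6)            # D = {1, ..., 5}
F = lambda x: x * x             # F(x) = x^2
phi = lambda x: x % 2 == 1      # phi(x) holds for odd x

agg = sum(F(x) for x in domain if phi(x))   # 1 + 9 + 25 = 35
```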
The precision-order on the truth values is given by $u <_p f$ and $u <_p t$. It can be extended pointwise to partial sets and partial structures, denoted $S ≤_p S'$. Notice that total structures are the maximally precise ones. We say that $S'$ extends $S$ if $S ≤_p S'$. A total structure $S$ is called functionally consistent if for each function $F$ with type $[τ_1, ..., τ_n] → τ_{n+1}$, the interpretation $F^S$ is the graph of a function $D_{τ_1} × ... × D_{τ_n} → D_{τ_{n+1}}$. A partial structure $S$ is functionally consistent if it has a functionally consistent two-valued extension. Unless stated otherwise, we will assume for the rest of this paper that all (partial) structures are functionally consistent. A domain atom (domain term) is a tuple of a predicate symbol $P$ (a function symbol $F$) and a tuple of domain elements $(d_1, ..., d_n)$. We will denote it as $P(d_1, ..., d_n)$ (respectively $F(d_1, ..., d_n)$). We say a domain term $t$ of type $τ$ is uninterpreted in $S$ if $\{d|d ∈ D_τ ∧ (t = d)^S = u\}$ is non-empty. To define the satisfaction relation on theories, we extend the interpretation of symbols to arbitrary terms and formulas using the Kleene truth assignments [12]. For a theory $T$ and a partial structure $S$, we say that $S$ is a model of $T$ (or in symbols $S ⊧ T$) if $T^S = t$ and $S$ is two-valued. **Inference Tasks.** In the KB paradigm, a specification is a bag of information. This information can be used for solving various problems by applying a suitable form of inference on it. $FO$ is standardly associated with deduction inference: a deductive inference task takes as input a pair of theory $T$ and sentence $φ$, and returns $t$ if $T ⊧ φ$ and $f$ otherwise. This is well-known to be undecidable for $FO$, and by extension for $FO(·)$. However, to provide the required functionality of an interactive configuration system we can use simpler forms of inference. 
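As an aside, the strong Kleene connectives used above to evaluate formulas in a partial structure can be sketched compactly. This is our own encoding of the standard three-valued truth tables over $\{t, f, u\}$, not code from the IDP system:

```python
# Kleene's strong three-valued connectives over {t, f, u}.
T, F, U = 't', 'f', 'u'

def k_not(a):
    return {T: F, F: T, U: U}[a]

def k_and(a, b):
    # false dominates; true only if both are true; otherwise unknown
    if a == F or b == F:
        return F
    if a == T and b == T:
        return T
    return U

def k_or(a, b):
    # defined dually via De Morgan
    return k_not(k_and(k_not(a), k_not(b)))
```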
Indeed, in many such domains a fixed finite domain is associated with each unknown configuration parameter. A natural format in logic to describe these finite domains is by a partial structure with a finite domain. Also other data that are often available in such problems can be represented in that structure. As such, various inference tasks are solvable by finite domain reasoning and become decidable. Below, we introduce base forms of inference and recall their complexity when using finite domain reasoning. We assume a fixed vocabulary $Σ$ and theory $T$.

**Modelcheck**($T, S$): input: a total structure $S$ and theory $T$ over the vocabulary interpreted by $S$; output is the boolean value $S ⊧ T$. Complexity is in $P$.

**Modelexpand**($T, S$): input: theory $T$ and partial structure $S$; output: a model $I$ of $T$ such that $S ≤_p I$, or UNSAT if there is no such $I$. Modelexpand [24] is a generalization for FO(·) theories of the modelexpansion task as defined in Mitchell et al. [13]. Complexity of deciding the existence of a modelexpansion is in NP.

**Optimize**($T, S, t$): input: a theory $T$, a partial structure $S$ and a term $t$ of numerical type; output: a model $I ≥_p S$ of $T$ such that the value $t^I$ of $t$ is minimal. This is an extension of the modelexpand inference. The complexity of deciding that a certain $t^I$ is minimal is in $\Delta^P_2$.

**Propagate**($T, S$): input: theory $T$ and partial structure $S$; output: the most precise partial structure $S_r$ such that $I ≥_p S_r$ for every model $I ≥_p S$ of $T$. The complexity of deciding that a partial structure $S'$ is $S_r$ is in $\Delta^P_2$. Note that we assume that all partial structures are functionally consistent, which implies that we also propagate functional constraints.

**Query**($S, E$): input: a (partial) structure $S$ and a set expression $E = \{x \mid ϕ(x)\}$; output: the set $A_Q = \{x \mid ϕ(x)^S = t\}$. Complexity of deciding that a set $A$ is $A_Q$ is in $P$.

Approximative versions exist for some of these inferences, with lower complexity [23].
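On a finite domain, Propagate can be realized naively by enumerating all total extensions of the partial structure that satisfy the theory and intersecting them. The sketch below restricts parameters to booleans and uses an invented toy theory; the IDP system of course implements this inference far more efficiently:

```python
from itertools import product

def propagate(check, params, partial):
    """Naive Propagate(T, S) for boolean parameters: enumerate every
    total extension of `partial` satisfying `check`, and keep exactly
    the values on which all models agree (the most precise structure
    below every model)."""
    free = [p for p in params if p not in partial]
    models = []
    for bits in product([False, True], repeat=len(free)):
        total = dict(partial, **dict(zip(free, bits)))
        if check(total):
            models.append(total)
    if not models:
        return None     # the partial structure is inconsistent with T
    return {p: models[0][p] for p in params
            if all(m[p] == models[0][p] for m in models)}

# toy theory T: (a -> b) and (a or c); user has chosen a = true
check = lambda s: (not s['a'] or s['b']) and (s['a'] or s['c'])
forced = propagate(check, ['a', 'b', 'c'], {'a': True})
```

Here `b` is propagated to true while `c` stays open, since models exist with either value of `c`.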
More inferences exist, such as simulation of temporal theories in FO(·) [4], which were not used in the experiment. ## 3 Interactive Configuration In an IC problem, one or more users search for a configuration of objects and relations between them that satisfies a set of constraints. Typically, the user is not aware of all constraints. There may be too many of them to keep track of. Even if the human user can oversee all constraints that he needs to satisfy, he is not a perfect reasoner and cannot comprehend all consequences of his choices. This in its own right makes such problems hard to solve. The problems get worse if the user does not know about the relevant objects and relations or the constraints on them, or if the class of involved objects and relations is large, if the constraints get more complex and more “irregular” (e.g., exceptions), if more users are involved, etc. On top of that, the underlying constraints in such problems tend to evolve quickly. All these complexities occur frequently, making the problem complex for a human user. In such cases, computer assistance is needed: the human user chooses and the system assists by guiding him through the search space. For a given IC problem, an IC system has information on that problem. There are a number of stringent rules to which a configuration should conform, and besides this there is a set of parameters. Parameters are the open fields in the configuration that need to be filled in by the user or decided by the system. ### 3.1 Running Example: Domain Knowledge A simplified version of the application in Sect. 5.1 is used in Sect. 4 as running example. We introduce the domain knowledge of this example here. Table 1. 
Example data

<table>
<thead>
<tr> <th colspan="2">PriceOf</th> <th colspan="2">PreReq</th> <th colspan="2">MaxCost</th> <th>IsOS</th> </tr>
<tr> <th>software</th> <th>int</th> <th>software</th> <th>software</th> <th>employee</th> <th>int</th> <th>software</th> </tr>
</thead>
<tbody>
<tr> <td>Windows</td> <td>60</td> <td>Office</td> <td>Windows</td> <td></td> <td></td> <td>Windows</td> </tr>
<tr> <td>Linux</td> <td>20</td> <td>\LaTeX</td> <td>Linux</td> <td></td> <td></td> <td>Linux</td> </tr>
<tr> <td>\LaTeX</td> <td>10</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Office</td> <td>30</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>DualBoot</td> <td>40</td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>

Example 1. Software on a computer has to be configured for different employees. Available software includes operating systems, editors and text processors. Each software has a price. Some software is required for other software. If more than one OS is needed, a DualBoot System is required. The software, the requirements, the budgets of the employees and the prices of software can be seen in Table 1.

### 3.2 Subtasks of an Interactive Configuration System

Any system assisting a user in interactive configuration must be able to perform a set of subtasks. We look at important subtasks that an interactive configuration system should support. Subtask 1: Acquiring Information From the User. The first task of an IC system is acquiring information from the user. The system needs to get a value for a number of parameters of the configuration from the user. There are several options: the system can ask questions to the user, it can make the user fill in a form containing open text fields, dropdown-menus, checkboxes, etc. Desirable aspects would be to give the user the possibility to choose the order in which he gives values for parameters and to omit filling in certain parameters (because he does not know or does not care). For example, in the running example a user might need a \LaTeX-package, but he does not care about which OS he uses. In that case the system will decide in his place that a Linux system is required.
Since a user is not fully aware of all constraints, it is possible that he inputs conflicting information. This needs to be handled or avoided. Subtask 2: Generating Consistent Values for a Parameter. After a parameter is selected (by the user or the system) for which a value is needed, the system can assist the user in choosing these values. A possibility is that the system presents the user with a list of all possible values, given the values for other parameters and the constraints of the configuration problem. Restricting the user to this list ensures that he is unable to input inconsistent information. Subtask 3: Propagation of Information. Assisting the user in choosing values for the parameters, a system can use the constraints to propagate the information that the user has communicated. This can be used in several ways. A system can communicate propagations through a GUI, for example by coloring certain fields red or graying out certain checkboxes. Another way is to give a user the possibility to explicitly ask "what if"-questions to the system. In Example 1, a user can ask the system what the consequences are if he were a secretary choosing an Office installation. The system answers that in this case a Windows installation is required, which results in a Linux installation becoming impossible (due to budget constraints) and, as a consequence, it also derives the impossibility of installing \LaTeX. Subtask 4: Checking the Consistency for a Value. When it is not possible/desirable to provide a list of possible values, the system checks that the value the user has provided is consistent with the known data and the constraints. Subtask 5: Checking a Configuration. If a user makes manual changes to a configuration, the system provides him with the ability to check if his updated version of the configuration still conforms to all constraints. Subtask 6: Autocompletion.
If a user has finished communicating all his preferences, the system autocompletes the partial configuration to a full configuration. This can be done arbitrarily (a value for each parameter such that the constraints are satisfied) or the user can have some other parameters like a total cost, that have to be optimized. Subtask 7: Explanation. If a supplied value for a parameter is not consistent with other parameters, the system can explain this inconsistency to the user. This can be done by showing minimal sets of parameters with their values that are inconsistent, or by showing (visualizations of) constraints that are violated. It can also explain to the user why certain automatic choices are made, or why certain choices are impossible. Subtask 8: Backtracking. It is not unthinkable that a user makes a mistake, or changes his mind after seeing consequences of choices he made. Backtracking is an important subtask for a configuration system. Backtracking can be supported in numerous ways. The simplest way is a simple back button, where the last choice a user made is reverted. A more involved option is a system where a user can select any parameter and erase his value for that parameter. The user can then decide this parameter at a later time-point. Even more complex is a system where a user can supply a value for a parameter and if it is not consistent with other parameters the system shows him which parameters are in conflict and proposes other values for these parameters such that consistency can be maintained. 4 Interactive Configuration in the KB Paradigm To analyze the IC problem from the KB point of view, we aim at formalizing the subtasks of Sect. 3 as inferences. In this paper we do not deal with user interface aspects. For a given application, our knowledge base consists of a vocabulary \( \Sigma \), a theory \( T \) expressing the configuration constraints and a partial structure \( S \). 
Initially, \( S_0 \) is the \( \Sigma \)-partial structure that contains the domains of the types and the input data. During IC, \( S_0 \) will become more and more precise partial structures \( S_i \) due to choices made by the user. For IC, the KB also contains \( L_{S_0} \), the set of all uninterpreted domain atoms/terms\(^1\) in \( S_0 \). These domain terms are the logical formalization of the parameters of the IC problem. \( \Sigma \) and \( T \) are fixed. As will be shown in this section, all subtasks can be formalized by (a combination of) inferences on this knowledge base consisting of \( \Sigma, T, S_0, L_{S_0} \) and information gathered from the user. **Example 2.** Continuing Example 1, use vocabulary \( \Sigma \) consisting of types: software, employee and int (integers), predicates Install(software), IsOS(software) and PreReq(software,software), functions PriceOf(software):int, MaxCost(employee):int and two constants Requester: employee and Cost: int. The initial partial structure \( S_0 \) consists of \{employee\( \rightarrow \) \{Secretary, Manager\}, software \( \rightarrow \) \{Windows, Linux, LaTeX, Office, DualBoot\}\} and interpretations for MaxCost(employee):int, IsOS(software), PriceOf(software):int and PreReq(software, software) as can be seen in Table 1. The set of parameters \( L_{S_0} \) is \{Requester, Install(Windows), Install(Linux), Install(Office), Install(LaTeX), Install(DualBoot), Cost\}. The theory \( T \) consists of the following constraints: \[ \forall s_1 s_2 : \text{Install}(s_1) \land \text{PreReq}(s_1, s_2) \Rightarrow \text{Install}(s_2). \] // The total cost is the sum of the prices of all installed software. \[ \text{Cost} = \text{sum}\{(s, \text{PriceOf}(s)) \mid \text{Install}(s)\}. \] \[ \exists s : \text{Install}(s) \land \text{IsOS}(s). \] \[ \text{Install}(\text{Windows}) \land \text{Install}(\text{Linux}) \Rightarrow \text{Install}(\text{DualBoot}).
\] **Subtask 1: Acquiring Information From the User.** Key in IC is collecting information from the user on the parameters. During the run of the system, the set of parameters that are still open changes. In our KB system, a combination of the inferences introduced in Sect. 2, which is called a derived inference, is used to calculate this set of parameters. **Definition 1.** Calculating Uninterpreted Terms. GetOpenTerms(\( T, S \)) is the derived inference with input a theory \( T \), a partial structure \( S \geq_p S_0 \) and the set \( L_{S_0} \) of terms. Output is a set of terms such that for every term \( t \) in that set, there exist models \( I_1 \) and \( I_2 \) of \( T \) that expand \( S \) for which \( t^{I_1} \neq t^{I_2} \). Or formally: \[ \{l | l \in L_{S_0} \land \{d | (l = d)^{S'} = u\} \neq \emptyset \land S' = \text{Propagate}(T, S)\} \] The complexity of deciding whether a given set of terms \( A \) is the set of uninterpreted terms is in \( \Delta^P_2 \). \(^1\)In the rest of this paper, a domain atom is treated as a term that evaluates to true or false. An IC system can use this set of terms in a number of ways. It can use a metric to select a specific term, which it can pose as a direct question to the user. It can also present a whole list of these terms at once and let the user pick one to supply a value for. In Sect. 5.1, we discuss two different approaches we implemented for this project. Example 3. In Example 2, the parameters and domains are already given. Assume that the user has chosen the value Manager for Requester, true for Install(Windows) and false for Install(Linux). The system will return $\text{GetOpenTerms}(T, S) = \{\text{Install(Office)}, \text{Install(DualBoot)}, \text{Cost}\}$. Subtask 2: Generating Consistent Values for a Parameter. A domain element $d$ is a possible value for term $t$ if there is a model $I \geq_p S$ such that $(t = d)^I = t$. Definition 2. Calculating Consistent Values.
$\text{GetConsistentValues}(T, S, t)$ is the derived inference with as input a theory $T$, a partial structure $S$ and a term $t \in \text{GetOpenTerms}(T, S)$. Output is the set

$$\{t^I \mid I \text{ is a model of } T \text{ expanding } S\}$$

The complexity of deciding that a set $P$ is the set of consistent values for $t$ is in $\Delta^P_2$.

**Example 4.** The possible values in the initial partial structure $S_0$ are: \{Secretary, Manager\} for Requester, the integers for Cost and \{true, false\} for the others.

**Subtask 3: Propagation of Information.** It is useful for the user to see the consequences of assigning a particular value to a parameter.

**Definition 3. Calculating Consequences.** $\text{PosConsequences}(T, S, t, a)$ and $\text{NegConsequences}(T, S, t, a)$ are derived inferences with as input a theory $T$, a partial structure $S$, an uninterpreted term $t \in \text{GetOpenTerms}(T, S)$ and a domain element $a \in \text{GetConsistentValues}(T, S, t)$. As output they have a set $C^+$, respectively $C^-$, of tuples $(q, b)$ of uninterpreted terms and domain elements. $(q, b) \in C^+$, respectively $C^-$, means that the choice $a$ for $t$ entails that $q$ will be forced, respectively prohibited, to be $b$. Formally,

$$C^+ = \{(q, b) \mid (q = b)^{S'} = t \wedge (q = b)^S = u$$
$$\wedge S' = \text{Propagate}(T, S \cup \{t = a\})$$
$$\wedge q \in \text{GetOpenTerms}(T, S) \setminus \{t\} \}$$

$$C^- = \{(q, c) \mid (q = c)^{S'} = f \wedge (q = c)^S = u$$
$$\wedge S' = \text{Propagate}(T, S \cup \{t = a\})$$
$$\wedge q \in \text{GetOpenTerms}(T, S) \setminus \{t\} \}$$

The complexity of deciding whether a set $P$ is $C^+$ or $C^-$ is in $\Delta^P_2$.

**Example 5.** Say the user has chosen Requester : Secretary and wants to know the consequences of making Install(Windows) true. The output in this case contains (Install(LaTeX), f) in PosConsequences(T, S, t, a) and (Install(LaTeX), t) in NegConsequences(T, S, t, a), since this combination is too expensive for a secretary.
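Definitions 1–3 all admit the same brute-force reading: enumerate the total structures that extend the partial structure S and satisfy T, then compare them. The sketch below does exactly that for boolean parameters. All names and the toy theory are illustrative, not part of the IDP system's API, and a real KB system uses propagation and solving rather than enumeration:

```python
from itertools import product

def models(satisfies, atoms, partial):
    """All total boolean assignments extending `partial` that satisfy
    the theory T, given as a Python predicate over assignments."""
    free = [a for a in atoms if a not in partial]
    for bits in product([False, True], repeat=len(free)):
        total = dict(partial, **dict(zip(free, bits)))
        if satisfies(total):
            yield total

def get_open_terms(satisfies, atoms, partial):
    """Definition 1: atoms on which two models of T still disagree."""
    seen = {}
    for m in models(satisfies, atoms, partial):
        for a in atoms:
            seen.setdefault(a, set()).add(m[a])
    return {a for a, values in seen.items() if len(values) > 1}

def get_consistent_values(satisfies, atoms, partial, term):
    """Definition 2: the set {t^I | I is a model of T expanding S}."""
    return {m[term] for m in models(satisfies, atoms, partial)}

def consequences(satisfies, atoms, partial, term, value):
    """Definition 3: atoms forced to a unique value once the
    choice `term := value` is added to S."""
    extended = dict(partial, **{term: value})
    seen = {}
    for m in models(satisfies, atoms, extended):
        for a in atoms:
            seen.setdefault(a, set()).add(m[a])
    return {a: vs.pop() for a, vs in seen.items()
            if len(vs) == 1 and a != term and a not in partial}

# Toy theory over atoms a, b, c: a => b. Fixing a to true forces b
# to true, so only c remains open.
theory = lambda m: (not m["a"]) or m["b"]
```

Note that for boolean atoms, a tuple (q, t) in C⁺ corresponds exactly to a tuple (q, f) in C⁻, which is why a single `consequences` function suffices in this sketch; the paper's remark about Cost shows the correspondence need not hold for numeric terms.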
Note that there is not always such a correspondence between the positive and negative consequences. For example, when deriving a negative consequence for Cost, this does not necessarily imply a positive consequence.

**Subtask 4: Checking the Consistency for a Value.** A value d for a term t is consistent if there exists a model of T in which t = d that extends the partial structure representing the current state.

**Definition 4. Consistency Checking.** CheckConsistency(T, S, t, d) is the derived inference with as input a theory T, a partial structure S, an uninterpreted term t and a domain element d. Output is a boolean b that indicates whether S extended with t = d still satisfies T. The complexity of deciding if a value d is consistent for a term t is in \( \text{NP} \).

**Subtask 5: Checking a Configuration.** Once the user has constructed a 2-valued structure S, or makes manual changes to it, he may need to check whether all constraints are still satisfied. A theory T is checked on a total structure S by calling Modelcheck(T, S), with complexity in \( \text{P} \).

**Subtask 6: Autocompletion.** If a user has finished communicating his preferences (Subtask 1) and there are undecided terms left which he does not know or care about, the user may want to get a full configuration (i.e., a total structure). This is computed by modelexpand. In particular:

\[ I = \text{Modelexpand}(T, S) \]

In many such situations the user wants to have a total structure with a minimal cost (given some term t representing the cost). This is computed by optimize:

\[ I = \text{Optimize}(T, S, t). \]

**Example 6.** Assume the user is a secretary and all he knows is that he needs Office. He chooses Secretary for Requester and true for Install(Office) and calls autocompletion. A possible output is a structure S where, for the remaining parameters, a choice is made that satisfies all constraints, e.g., \( \text{Install}(\text{Windows})^S = t \), \( \text{Install}(\text{DualBoot})^S = t \) and the other Install atoms false.
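Subtasks 4–6 fall out of the same brute-force reading over total structures: consistency checking asks whether one model exists, autocompletion returns one, and optimization returns the best one under a cost function. A self-contained sketch for boolean parameters (illustrative names only; a real KB system delegates these calls to its solver):

```python
from itertools import product

def models(satisfies, atoms, partial):
    """All total boolean assignments extending `partial` that satisfy T."""
    free = [a for a in atoms if a not in partial]
    for bits in product([False, True], repeat=len(free)):
        total = dict(partial, **dict(zip(free, bits)))
        if satisfies(total):
            yield total

def check_consistency(satisfies, atoms, partial, term, value):
    """Subtask 4: is there a model of T extending S with term = value?"""
    extended = dict(partial, **{term: value})
    return next(models(satisfies, atoms, extended), None) is not None

def model_expand(satisfies, atoms, partial):
    """Subtask 6: some total structure extending S, or None."""
    return next(models(satisfies, atoms, partial), None)

def optimize(satisfies, atoms, partial, cost):
    """Subtask 6, optimizing variant: a cheapest model extending S."""
    return min(models(satisfies, atoms, partial), key=cost, default=None)

# Toy theory: at least one of a, b must hold.
theory = lambda m: m["a"] or m["b"]
```

Enumeration makes the complexity claims above concrete: one call to `models` searches up to 2ⁿ candidates, which is exactly the NP witness search that a real solver performs cleverly.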
This is not the cheapest solution (lowest cost). By calling optimize using the cost term Cost, the DualBoot is dropped.

**Subtask 7: Explanation.** It is clear that a whole variety of options can be developed to provide different kinds of explanations to a user. If a user supplies an inconsistent value for a parameter, options can range from calculating a minimal inconsistent subset of the theory T as in [18,20,25], to giving a proof of inconsistency as in [15], to calculating a minimally precise partial configuration that has this inconsistency. We look at a derived logical inference for this last option.

**Definition 5. Calculating Minimal Inconsistent Structures.** \textit{UnsatStructure}(T, S) is a derived inference with as input a theory T and a partial structure S that cannot be extended to a model of T, and as output all minimal (partial) structures S’ ≤_p S such that S’ cannot be extended to a model I of T. Formally\(^2\), we return:

\[ \min_{\leq_p} \{ S' \mid S' \leq_p S \land \neg(\exists I : I \geq_p S' \land I \models T) \} \]

The complexity of deciding if a set is the set of minimal inconsistent structures is in \(\Delta^P_2\).

**Example 7.** Say a secretary wants to install all software. This is not possible, so he asks for an explanation. Running \textit{UnsatStructure} on the theory of Example 2 and that structure extended with Requester = Secretary and Install(software) true for all software will return a set with, among others, a structure in which Requester = Secretary, Install(Windows) and Install(Linux) are true, since a secretary does not have the budget to install two operating systems.

**Subtask 8: Backtracking.** If a value for a parameter is not consistent, the user has to choose a new value for this parameter, or backtrack to revise a value for another parameter. In Sect. 3.2 we discussed three options of increasing complexity for implementing backtracking functionality.
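Definition 5 can also be sketched by brute force, and the same inference powers the backtracking option discussed under Subtask 8: enumerate sub-assignments of the user's choices by increasing size and keep those that admit no model and contain no smaller kept set. Illustrative code for boolean parameters, not the IDP implementation:

```python
from itertools import combinations, product

def has_model(satisfies, atoms, partial):
    """Does some total assignment extending `partial` satisfy T?"""
    free = [a for a in atoms if a not in partial]
    return any(satisfies(dict(partial, **dict(zip(free, bits))))
               for bits in product([False, True], repeat=len(free)))

def unsat_structures(satisfies, atoms, partial):
    """Minimal sub-assignments of `partial` that admit no model of T."""
    choices = list(partial.items())
    minimal = []
    for size in range(len(choices) + 1):
        for combo in combinations(choices, size):
            sub = dict(combo)
            # Skip supersets of an already-found minimal structure:
            # iterating by size guarantees minimality of what we keep.
            if any(found.items() <= sub.items() for found in minimal):
                continue
            if not has_model(satisfies, atoms, sub):
                minimal.append(sub)
    return minimal

# Toy theory: a and b may not both hold (think: two operating
# systems over budget). Choosing both is the minimal conflict.
theory = lambda m: not (m["a"] and m["b"])
```

Each returned structure names exactly the past choices a user would have to undo, which is how the backtracking option below reuses this inference.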
Erasing a value for a parameter is easy to provide in our KB system, and since this is a generalization of a back button (erasing the last value) we have a formalization of the first two options. Erasing a value \(d\) for parameter \(t\) in a partial structure \(S\) is simply modifying \(S\) such that \((t = d)^S = u\).

As with explanation, a number of more complex options can be developed. We look at one possibility. Given a partial configuration \(S\), a parameter \(p\) and a value \(d\) that is inconsistent for that parameter, calculate a minimal set of previous choices that need to be undone such that this value is possible for this parameter. We can use Definition 5 and calculate \textit{UnsatStructure}(\(T \land (t = d), S\)). This inference calculates a set of minimal sets of previous choices that need to be undone. Backtracking over one of the sets in this set results in a maximal partial subconfiguration \(S’\) of \(S\) such that \(d\) is a possible value for \(t\) in \(S’\).

\(^2\) We note that \(\leq_p\) is a partial order and write \(\min_{\leq_p}\) for the set of all minimal elements of a set according to that order.

## 5 Proof of Concept

### 5.1 Implementation

**Overview.** During the configuration process, the user specifies his choices step-by-step. As argued in the introduction, a configurator tool can support the user in many ways: displaying the cost of the current partial configuration, propagating the impact of the choices of the user, presenting remaining possible values for variables, explaining why certain choices are impossible, checking validity of a finished configuration, completing the don’t cares of a user (potentially optimizing a cost function), etc. This work started as a feasibility study about using a KB system for solving interactive configuration problems.
In this section we describe the developed application and implementation, based on the IDP system for the back-end, together with a GUI made in QML [16] as front-end.³ This was done in cooperation with Adaptive Planet, a consulting company [1] that developed the user interface, and an international banking company, which provided us with a substantial configuration problem to test our implementation. The goal was to develop a highly customizable application for general configuration problems. The GUI is a blank canvas, which is unaware of the configuration problem at hand. The IDP KB system has a knowledge base (a theory) containing all domain knowledge and a set with all parameters (uninterpreted terms); IDP is used for all the inferences on that knowledge base, which provide the functionalities of the subtasks discussed in Sect. 4. The developed application had 300 parameters and 650 constraints. This domain knowledge was distilled from a spreadsheet that the banking company currently uses for their interactive configuration tasks. Two user interfaces are available for the user to choose from:

**Wizard.** In the wizard interface, the user is interrogated and answers successive questions selected by the system, using the GetOpenTerms inference. An important side note here is that the user can choose not to answer a specific question, for instance because he cannot decide as he is missing relevant information, or because he is not interested in the actual value (at this point). These parameters can be filled in at a later time by the user, or the system can fill in all parameters using autocompletion.

**Drill-Down.** In the drill-down interface, the user sees a list of the still open parameters, and can pick which one he wants to fill in next. This interface is useful if the user is a bit more knowledgeable about the specific configuration and wants to give the values in a specific order.
In both interfaces the user is assisted in the same way when he enters data. When he or the system selects a parameter, he is provided with a dropdown list of the possible values, using the GetConsistentValues inference. Before committing to a choice, he is presented with the consequences of his choice, using the calculate-consequences inference. The nature of the system guarantees a correct configuration and will automatically give the user support using all information it has (from the knowledge base, or received from the user).

**Evaluation.** When evaluating the quality of software (especially when evaluating declarative methods), scalability (data complexity) is often seen as the most important quality metric. Naturally, when using an interactive configuration system, performance is important. However, in the configuration community it is known that reasoning about typical configuration problems is relatively easy and does not exhibit real exponential behavior [21]. In this experiment (a configuration task with 300 parameters and 650 constraints), our users reported a response time of half a second on average, with outliers up to 2 seconds. Note that the provided implementation was a naive prototype and optimizing the efficiency of the implemented algorithms is still possible in a number of ways. Also, it is reasonable to expect the number of parameters to be limited, since humans need to fill in the configuration in the end. When developing a configuration system, challenges lie in the complexity of the knowledge, its high volatility and the complex functionalities to be built. In such cases, more relevant than scalability are the standard metrics of software engineering: providing good functionality, maintainability, reuse and extensibility.

³ More info about this implementation, a downloadable demo and another example of a configuration system developed with IDP as an engine (a simpler course configuration demo) can be found at: http://www.configuration.tk.
**Maintainability and Reuse.** The information used in an IC system is volatile; it depends, for example, on ever-changing company policies. As such, it is vital that when that information changes, the system can be easily adapted. When using custom software, all tasks using domain knowledge (like rules and policies) need their own program code. The domain knowledge is scattered all over the program. If a policy changes, a programmer has to find all snippets of program code that are relevant for guarding this policy and modify them. This results in a system that is hard to maintain, hard to adapt and error-prone. Every time the domain knowledge changes, a whole development cycle has to be run through again. The development of a KB system with a centrally maintained knowledge base makes the knowledge directly available, readable and adaptable.

**Extensibility.** Supporting all subtasks expressed above is important for a good configuration system, but it is also important to have the possibility to accommodate new subtasks. A good system should be easily extensible with new functionalities, preferably without duplicating domain knowledge. This is one of the key points of a KB system. New inferences can be developed and added, independent of the domain knowledge.

**Functionality.** For evaluating functionality, industrial partners involved in this project have tested the proof of concept and compared it with their conventional software solutions. The most common approach to developing configuration tools is building custom software. Other frequently used technologies to handle interactive configuration problems are spreadsheets and business-rule systems. When starting this project, the users had the following major issues with these systems, for which conceptual, general solutions were given by our approach:

- **Unidirectional dataflow:** All these systems have an obligatory unidirectional dataflow.
This fixes beforehand which parameters are input and which parameters are output. However, given a problem statement, it is not natural to make a distinction between input and output. Different users may have different information or different needs and regard different parameters as input. In our approach, this distinction is not made at all by our inferences.

- **Incomplete knowledge:** These systems have problems reasoning with incomplete knowledge, i.e., rules and functions can only compute their result when their input is complete, and they also cannot use partial knowledge to deduce (partial) new knowledge, e.g., to eliminate configuration options. Our language naturally accommodates partial knowledge, and is able to represent every intermediate partial configuration. These partial configurations are used by the inferences to calculate possible total configurations, consequences, etc.

## 6 Related Work

In different branches of AI research, people have been focusing on configuration software in different settings. Axling et al. [3] represent domain knowledge in the SICStus Object Language and have a configuration system specific for configuring physical objects, e.g., computers. An ontology-based method was also proposed by Vanden Bossche et al. [22] using OWL. These approaches are less general because it is precisely the goal of the KB paradigm to reuse the knowledge for different reasoning tasks. All these approaches are focused towards one specific inference: ontologies are focused on deduction, Prolog and rule systems are focused on backward/forward chaining, etc. Tiihonen et al. developed a configuration system WeCoTin [21]. WeCoTin uses Smodels, an ASP system, as its inference engine for propagating the consequences of choices. In 2004, Hadzic et al. [9] started working on solving different aspects of interactive configuration.
They described solutions for these problems using knowledge compilation techniques such as binary decision diagrams (BDD) and using Boolean satisfiability solving (SAT). Hadzic et al. [9] stressed the importance of a distinction between configuration knowledge and the configuration task. This is similar to our separation of concerns by separating knowledge from computation. The authors also implemented solvers and systems for solving interactive configuration and interactive reconfiguration in later work [10]. Overall, the goal of their work is to develop different algorithms to solve different aspects of configuration problems and not to study an abstract reasoning framework in which knowledge and computation are separated.

The contributions of this paper are different: we analyzed IC problems from a Knowledge Representation point of view. It is a discussion of possible approaches and the importance of this point of view. We made a study of desired functionalities for an IC system and how we can define logical reasoning tasks to supply these functionalities. In this project a more expressive language was used than in other work that we are aware of. Subbarayan et al. [19] for example use propositional logic, which is extended to CP by Andersen et al. [2]. The expressivity of the language is crucial for the usability of the approach. It allows us to address a broader range of applications; moreover, it is easier to formalize and maintain the domain knowledge. A first approach to using the KB paradigm for IC was taken by Vlaeminck et al. [23], also using the FO(·) IDP project. Our work extends this by analyzing a real-life application and discussing new functionalities.

## 7 Challenges and Future Work

Interactive configuration problems are part of a broader kind of problems, namely service provisioning problems. Service provisioning is the problem domain of coupling service providers with end users, starting from the request until the delivery of the service.
Traditionally, such problems start with designing a configuration system that allows users to communicate their wishes, for which we provided a knowledge-based solution. After all the information is gathered from a user, it is still necessary to make a plan for the production and delivery of the selected configuration. Hence the configuration problem is followed by a planning problem that shares domain knowledge with the configuration problem but that also has its own domain knowledge about providers of components, production processes, etc. This planning problem then leads to a monitoring problem. Authorisations could be required, payments need to be checked, or it could be that the configuration becomes invalid mid-process. In this case the configuration needs to be redone, but preferably without losing much of the work that is already done. Companies need software that can manage and monitor the whole chain, from initial configuration to final delivery, and do so without duplication of domain knowledge. This is a problem area where the KB approach holds great promise but where further research is needed to integrate the KB system with the environment that the company uses to follow up its processes.

Other future work may include language extensions to better support configuration-like tasks. A prime example of this is templates [5]. Often the theory of a configuration problem contains many constraints that are similar in structure. It seems natural to introduce a language construct to abstract away the common parts. Another useful language extension is reification, to talk about the symbols in a specification rather than about their interpretation. Reification allows the system to reason on a meta level about the symbols and, for example, assign symbols to a category like “Technical” or “Administrative”.
## 8 Conclusion

The KB paradigm, in which a strict separation between knowledge and problem solving is proposed, was analyzed in a class of knowledge intensive problems: interactive configuration problems. After discussing why solutions for this class are hard to develop, we proposed a novel approach to the configuration problem based on an existing KB system. We analyzed the functional requirements of an IC system and investigated how we can provide these, using logical inferences on a knowledge base. We identified interesting new inference methods and applied them to the interactive configuration domain. We studied this approach in the context of a large application, for which we built a proof of concept, using the KB system as an engine, which we extended with the new inferences. As proof of concept, we solved a configuration problem for a large banking company. Results are convincing and open perspectives for further research in service provisioning.

References

Published in: Practical Aspects of Declarative Languages, 18th International Symposium, PADL 2016, St. Petersburg, FL, USA, January 18–19, 2016, Proceedings. Gavanelli, M.; Reppy, J. (Eds.), 2016. ISBN: 978-3-319-28227-5
16.1 HISTORY OF OBJECTS

The most common style of programming today is **object-oriented programming**. We’re going to define it in contrast with the procedural programming that we’ve been doing up until now. Back in the 1960s and 1970s, procedural programming was the dominant form of programming. People used **procedural abstraction** and defined lots of functions at high and low levels, and reused their functions wherever possible. This worked reasonably well—up to a point.

As programs got really large and complex, with many programmers working on them at the same time, procedural programming started to break down. Programmers ran into problems with procedure conflicts. People would write programs that modified data in ways that other people didn’t expect. They would use the same names for functions and find that their code couldn’t be integrated into one large program.

There were also problems in **thinking** about programs and the tasks the programs were supposed to perform. Procedures are about **verbs**—tell the computer to do this, tell the computer to do that. But it’s not clear whether that’s the way people think best about problems. Object-oriented programming is **noun-oriented programming**. Someone building an object-oriented program starts by thinking about what the nouns are in the *domain* of the problem—what are the people and things that are part of this problem and its solution? The process of identifying the objects, what each of them knows about (with respect to the problem), and what each of them has to do is called **object-oriented analysis**.

Programming in an object-oriented way means that you define variables (called **instance variables**) and functions (called **methods**) for the objects. In most object-oriented languages, programs have very few or even no global functions or variables—things that are accessible everywhere.
In the original object-oriented programming language, Smalltalk, objects could only get things done by asking each other to do things via their methods. Adele Goldberg, one of the pioneers of object-oriented programming, calls this “Ask, don’t touch.” You can’t just “touch” data and do whatever you want with it—instead, you “ask” objects to manipulate their data through their methods. That is a good goal even in languages like Python or Java where objects can manipulate each other’s data directly.

The term object-oriented programming was invented by Alan Kay. Kay is a brilliant multidisciplinary character—he holds undergraduate degrees in mathematics and biology, a Ph.D. in computer science, and has been a professional jazz guitarist. In 2004, he was awarded the ACM Turing Award, which is sort of the Nobel Prize of computing. Kay saw object-oriented programming as a way of developing software that could truly scale to large systems. He described objects as being like biological *cells* that work together in well-defined ways to make the whole organism work. Like cells, objects would:

- Help manage complexity by distributing responsibility for tasks across many objects rather than one big program.
- Support robustness by making the objects work relatively independently.
- Support reuse because each object would provide services to other objects (tasks that the object would do for other objects, accessible through its methods), just as real-world objects do.

The notion of starting from nouns is part of Kay’s vision. Software, he said, is actually a *simulation* of the world. By making software model the world, it becomes clearer how to make software. You look at the world and how it works, then copy that into software. Things in the world know things—these become **instance variables**. Things in the world can do things—these become **methods**.

Of course, we’ve been using objects already. Pictures, sounds, samples, and colors are all objects.
Our lists of pixels and samples are examples of **aggregation**, which is creating collections of objects. The functions we’ve been using are actually just covering up the underlying methods. We can just call the objects’ methods directly, which we will do later in this chapter.

## 16.2 WORKING WITH TURTLES

Seymour Papert, at MIT, used robot turtles to help children think about how to specify procedures in the late 1960s. The turtle had a pen in the middle of it that could be raised and lowered to leave a trail of its movements. As graphical displays became available, he used a virtual turtle on a computer screen instead of a robotic turtle. Part of the media support in JES provides graphical turtle objects.

Turtles make a great introduction to the ideas of objects. We manipulate turtle objects that move around a world. The turtles know how to move and turn. The turtles have a pen in the middle of them that leaves a trail to show their movements. The world keeps track of the turtles that are in it.

### 16.2.1 Classes and Objects

How does the computer know what we mean by a turtle and a world? We have to define what a turtle is, what it knows about, and what it can do. We have to define what a world is, what it knows about, and what it can do. In Python we do this by defining classes. A class defines what things or objects (instances) of that class know and can do. The media package for JES defines classes that define what we mean by a turtle and a world.

Object-oriented programs consist of objects. We create objects from classes. The class knows what each object of that class needs to keep track of and what it should be able to do. You can think of a class as an object factory. The factory can create many objects. A class is also like a cookie cutter. You can make many cookies from one cookie cutter and they will all have the same shape. Or you can think of the class as a blueprint and the objects as the houses that you can create from the blueprint.
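The cookie-cutter idea can be tried in plain Python with a toy class of our own (`Cookie` is a made-up example for illustration, not part of JES):

```python
class Cookie:
    """One cutter (the class) stamps out many cookies (the instances)."""
    def __init__(self, flavor):
        self.flavor = flavor   # each instance carries its own data

c1 = Cookie("ginger")
c2 = Cookie("sugar")
# c1 and c2 came from the same class, but they are distinct objects,
# each with its own flavor
```

The class supplies the shape (the methods and the named instance variables); each instance supplies its own data.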
To create and initialize a world you use `makeWorld()`. To create a turtle object, you can use `makeTurtle(world)`. That looks pretty similar to `makePicture` and `makeSound`—there is a pattern here, but we will introduce a new, more standard Python syntax in just a bit. Let’s create a new world object.

```python
>>> makeWorld()
```

This will create a world object and display a window that shows the world. It will just start as an all-white picture in a frame titled “World.” But we can’t refer to it, since we didn’t name it. Here we name the world object that gets created `earth`, and then create a turtle object in the world named `earth`. We will name the turtle object `tina`.

```python
>>> earth = makeWorld()
>>> tina = makeTurtle(earth)
>>> print tina
No name turtle at 320, 240 heading 0.0.
```

The turtle object appears in the center of the world (320, 240), facing north (a heading of 0) (Figure 16.1). The turtle hasn’t been assigned a name yet. The turtle support in JES allows us to create many turtles, and each new turtle will appear in the center of the world. Let’s create a second turtle and name it `sue`.

```python
>>> sue = makeTurtle(earth)
```

### 16.2.2 Sending Messages to Objects

We can ask the turtle to do things by sending a message to the turtle object, which we also think of as calling a method on an object. We do this using dot notation. In dot notation we ask an object to do something by specifying the name of the object, then a ‘.’, and then the method to execute (`name.method(parameterList)`). We saw dot notation with strings in Section 10.3.1.

```python
>>> tina.forward()
>>> tina.turnRight()
>>> tina.forward()
```

Notice that only the turtle that we asked to do the actions moves (Figure 16.2). We can make the other one move by asking it to do things as well.

```python
>>> sue.turnLeft()
>>> sue.forward(50)
```

Notice that different turtles have different colors (Figure 16.3). As you can see, turtles know how to turn left and right, using `turnLeft()` and `turnRight()`.
They also can go forward in the direction they are currently heading, using `forward()`. By default they go forward 100 pixels, but you can also specify how many pixels to go forward, as in `forward(50)`. Turtles know how to turn a given number of degrees as well. Positive amounts turn the turtle to the right and negative amounts to the left (Figure 16.4).

FIGURE 16.2 Asking one turtle to move and turn, while the other one remains.

FIGURE 16.3 After the second turtle moves.

FIGURE 16.4 Turning a specified amount (−45).

```python
>>> tina.turn(-45)
>>> tina.forward()
```

### 16.2.3 Objects Control Their State

In object-oriented programming, we send messages to ask objects to do things. The objects can refuse to do what you ask. An object should refuse if you ask it to do something that would cause its data to be wrong.

The world that the turtles are in is 640 pixels wide by 480 high. What happens if you try to tell the turtle to go past the end of the world?

```python
>>> world1 = makeWorld()
>>> turtle1 = makeTurtle(world1)
>>> turtle1.forward(400)
>>> print turtle1
No name turtle at 320, 0 heading 0.0.
```

Turtles are first positioned at (320, 240), heading north (up). In the world the top left position is (0, 0); x increases to the right and y increases going down. By asking the turtle to go forward 400, we are asking it to go to (320, 240 − 400), which would result in a position of (320, −160). But the turtle refuses to leave the world and instead stops when the center of the turtle is at (320, 0) (Figure 16.5). This means we won’t lose sight of any of our turtles.

The point of this exercise is to show how methods control access to the object’s data. If you do not want an object’s variables to take on certain values, you control that through the methods. The methods serve as the gateway to, and gatekeeper for, the object’s data.

Turtles can do lots of other things as well as go forward and turn.
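Outside JES, the gatekeeper idea can be sketched with a made-up class of our own. Here a hypothetical `GuardedWalker` refuses to carry its position past the top edge of a 640×480 world, much as the JES turtle does (this is an illustration, not JES code):

```python
class GuardedWalker:
    """A stand-in for a turtle: starts at the center of a 640x480
    world, and forward() only moves it straight up in this sketch."""
    def __init__(self):
        self.x, self.y = 320, 240
    def forward(self, steps):
        # The method is the gatekeeper: clamp at y = 0 rather than
        # accept an impossible position like (320, -160)
        self.y = max(0, self.y - steps)

w = GuardedWalker()
w.forward(400)   # asks for y = 240 - 400 = -160; the method stops at 0
```

Because all movement goes through `forward()`, the object can never hold an invalid position, no matter what callers ask for.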
As you have probably noticed, when the turtles move they draw a line that is the same color as the turtle. You can ask the turtle to pick up the pen using `penUp()`. You can ask the turtle to put down the pen using `penDown()`. You can ask the turtle to move to a particular position using `moveTo(x,y)`. If the pen is down when you ask the turtle to move to a new position, the turtle will draw a line from the old position to the new position (Figure 16.6).

```python
>>> worldX = makeWorld()
>>> turtleX = makeTurtle(worldX)
>>> turtleX.penUp()
>>> turtleX.moveTo(0,0)
>>> turtleX.penDown()
>>> turtleX.moveTo(639,479)
```

You can change the color of a turtle using `setColor(color)`. You can stop drawing the turtle using `setVisible(False)`. You can change the width of the pen using `setPenWidth(width)`.

## 16.3 TEACHING TURTLES NEW TRICKS

We have already defined a Turtle class for you. But what if you want to create your own type of turtle and teach it to do new things? We can create a new type of turtle that will understand how to do all the things that a turtle knows how to do, and we can also add some new functionality. This is called creating a subclass. Just like children inherit eye color from their parents, our subclass will inherit all the things that turtles know and can do. A subclass is also called a child class, and the class that it inherits from is called the parent class or superclass.

We call our subclass SmartTurtle. We add a method that allows our turtle to draw a square. Methods are defined just like functions, but they are inside the class. Methods in Python always take as input a reference to the object of the class that the method is called on (usually called self). To draw a square our turtle will turn right and go forward 4 times. Notice that we inherit the ability to turn right and go forward from the Turtle class.
**Program 147: Defining a Subclass**

```python
class SmartTurtle(Turtle):
    def drawSquare(self):
        for i in range(0, 4):
            self.turnRight()
            self.forward()
```

**FIGURE 16.6** Using the turtle to draw a diagonal line.

Since the SmartTurtle is a kind of Turtle, we can use it in much the same way. But we will need to create the SmartTurtle in a new way. We have been using makePicture, makeSound, makeWorld, and makeTurtle to make our objects. These are functions we have created to make it easier to make these objects. But the actual way in Python to create a new object is to use `ClassName(parameterList)`. To create a world you can use `worldObj = World()` and to create a SmartTurtle you can use `turtleObj = SmartTurtle(worldObj)`.

```python
>>> earth = World()
>>> smarty = SmartTurtle(earth)
>>> smarty.drawSquare()
```

Our SmartTurtle now knows how to draw a square (Figure 16.7). But it can only draw squares of size 100. It would be nice to be able to draw different size squares. We can give the method a parameter that specifies the width of the square. (A second `def drawSquare(self, width)` would silently *replace* the first one in Python, so we use a default value of 100 for the width instead; that way both `drawSquare()` and `drawSquare(30)` work.)

```python
class SmartTurtle(Turtle):
    def drawSquare(self, width=100):
        for i in range(0, 4):
            self.turnRight()
            self.forward(width)
```

**FIGURE 16.7** Drawing a square with our SmartTurtle.

You can use this to draw different size squares (Figure 16.8).

```python
>>> mars = World()
>>> tina = SmartTurtle(mars)
>>> tina.drawSquare(30)
>>> tina.drawSquare(150)
>>> tina.drawSquare(100)
```

**FIGURE 16.8** Drawing different size squares.

### 16.3.1 Overriding an Existing Turtle Method

A subclass can redefine a method that already exists in the superclass. You might do this to create a specialized form of the existing method.
Here’s the class ConfusedTurtle, which redefines `forward` and `turn` so that it does the `Turtle` class’s `forward` and `turn`, but by a random amount. You use it just like a normal turtle—but it won’t go forward or turn as much as you request. The below example will have goofy go forward not-quite-100 and turn nowhere-near-90.

```python
>>> pluto = World()
>>> goofy = ConfusedTurtle(pluto)
>>> goofy.forward(100)
>>> goofy.turn(90)
```

### Program 149: ConfusedTurtle, Which Goes and Turns a Random Amount

```python
import random

class ConfusedTurtle(Turtle):
    def forward(self, num):
        Turtle.forward(self, int(num * random.random()))
    def turn(self, num):
        Turtle.turn(self, int(num * random.random()))
```

### How It Works

We declare the class ConfusedTurtle to be a subclass of Turtle. We define two methods in ConfusedTurtle: forward and turn. Like any other method, they take self and whatever the method input is. In these cases, the input to both is a number, `num`. What we want to do is to call the superclass (i.e., Turtle) and have it do the normal forward and turn, but with the input multiplied by a random number. Each method’s body is only a single line, but it’s a fairly complicated line.

- We have to tell Python explicitly to call Turtle’s forward.
- We have to pass in self, so that the right object’s data gets used and updated.
- We multiply the input num by random.random(), but we need to convert it to an integer (using int). The random number returned will be between 0 and 1 (a floating-point number), but we need an integer for forward and turn.

### 16.3.2 Using Turtles for More

Turtles have a bunch of methods that allow for interesting graphical effects. For example, turtles are aware of each other. They can `turnToFace(anotherTurtle)` to change the heading so that the turtle is “facing” another turtle (so that if it keeps going forward, it will reach the other turtle).
In the below example, we set up four turtles (al, bo, cy, and di) in four corners of a square, then repeatedly have each one chase its clockwise neighbor. The result is Figure 16.9.

**FIGURE 16.9** Four turtles chasing each other.

**Program 150: Chase Turtles**

```python
def chase():
    # Set up the four turtles
    earth = World()
    al = Turtle(earth)
    bo = Turtle(earth)
    cy = Turtle(earth)
    di = Turtle(earth)
    al.penUp()
    al.moveTo(10, 10)
    al.penDown()
    bo.penUp()
    bo.moveTo(10, 400)
    bo.penDown()
    cy.penUp()
    cy.moveTo(400, 10)
    cy.penDown()
    di.penUp()
    di.moveTo(400, 400)
    di.penDown()
    # Now, chase for 300 steps
    for i in range(0, 300):
        chaseTurtle(al, cy)
        chaseTurtle(cy, di)
        chaseTurtle(di, bo)
        chaseTurtle(bo, al)

def chaseTurtle(t1, t2):
    t1.turnToFace(t2)
    t1.forward(4)
```

### How It Works

The main function here is chase(). The first few lines create a world and the four turtles, and place each of them in the corners (10, 10), (10, 400), (400, 10), and (400, 400). For 300 steps (a relatively arbitrary number), each turtle is told to “chase” (chaseTurtle) the one next to it, clockwise. So the turtle that starts at (10, 10) (al) is told to chase the turtle that starts at (400, 10) (cy). To chase means that the first turtle turns to face the second turtle, then moves forward four steps. (Try different values—we liked the visual effect of 4 the most.) Eventually, the turtles spiral in to the center.

These functions are valuable for creating simulations. Imagine that we had brown turtles to act as deer, and gray turtles to act as wolves. Wolves would turnToFace deer when they saw them, and chase them. To run away, deer might turnToFace an oncoming wolf, then turn 180 and run away. Simulations are among the most powerful and insight-providing uses of computers.
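The chase itself is just vector arithmetic, so it can be tried outside JES with a minimal stand-in class (`Walker` is hypothetical; a turnToFace-plus-forward pair becomes one `chase` step):

```python
import math

class Walker:
    """A bare-bones stand-in for a turtle: just an (x, y) position."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def chase(self, other, step=4):
        # Equivalent to turnToFace(other) followed by forward(step):
        # move `step` units along the direction toward the other walker
        dx, dy = other.x - self.x, other.y - self.y
        dist = math.hypot(dx, dy)
        if dist > 0:
            self.x += step * dx / dist
            self.y += step * dy / dist

al, bo = Walker(10, 10), Walker(10, 400)
cy, di = Walker(400, 10), Walker(400, 400)
for i in range(300):
    al.chase(cy)
    cy.chase(di)
    di.chase(bo)
    bo.chase(al)
# After 300 rounds the four walkers have spiraled in close together
```

Printing the positions after the loop shows all four clustered near the center of the square, matching the spiral in the figure.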
**Computer Science Idea: Parameters Work a Bit Differently with Objects**

When you call a function and pass in a number as an input, the parameter variable (the local variable that accepts the input) essentially gets a copy of the number. Changing the local variable does not change the input variable. Look at the function chaseTurtle. When we call the function with chaseTurtle(al,cy), we do change the position and heading of the turtle whose name is al. Why is it so different? It isn’t really. The variable al doesn’t actually hold a turtle—it holds a reference to a turtle. Think of it as an address (in memory) of where the turtle object can be found. If you make a copy of an address, the address still references the same place. The same turtle is being manipulated inside and outside the function. We still can’t make al reference a new object from within a function like chaseTurtle. We can only change the object that al references.

Turtles also know how to drop pictures. When a turtle drops a picture, the turtle stays at the upper-left-hand corner of the picture—at whatever heading the turtle is facing (see Figure 16.10).

```python
>>> # I chose Barbara.jpg for this
>>> p = makePicture(pickAFile())
>>> # Notice that we make the World and Turtle here
>>> earth = World()
>>> turtle = Turtle(earth)
>>> turtle.drop(p)
```

FIGURE 16.10 Dropping a picture on a world.

Turtles can also be placed on pictures, as well as World instances. When you put a turtle on a picture, its body doesn’t show up by default (though you can make it visible), so that it doesn’t mess up the picture. The pen is down, and you can still draw. Putting a turtle on a picture means that we can create interesting graphics on top of existing pictures, or use existing pictures in our turtle manipulations. One of our favorite techniques is spinning a picture: have the turtle move a little, turn a little, drop a copy of the picture, then keep going. Here’s an example (Figure 16.11). Below is the code that made the picture.
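The reference-versus-copy distinction can be seen with a small made-up class (`Point` is hypothetical, not a JES class): mutating the object a parameter refers to changes the caller's object, but rebinding the parameter name does not.

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def nudge(p):
    # Mutates the object that both names (p and pt) refer to
    p.x = p.x + 10

def swap_out(p):
    # Rebinds only the local name p; the caller's variable is untouched
    p = Point(0, 0)

pt = Point(5, 5)
nudge(pt)      # pt.x is now 15: the change is visible outside
swap_out(pt)   # pt still refers to the original Point
```

This is exactly why chaseTurtle can move al: it receives a reference to the same turtle object, not a copy of it.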
We called it with the same picture of Barb from the previous example, `show(spinAPicture(p))`.

**Program 151: Spinning a Picture by Dropping It from a Turning Turtle**

```python
def spinAPicture(apic):
    canvas = makeEmptyPicture(640, 480)
    ted = Turtle(canvas)
    for i in range(0, 360):
        ted.drop(apic)
        ted.forward(10)
        ted.turn(20)
    return canvas
```

**FIGURE 16.11** Dropping a picture on a picture, while moving and turning.

## 16.4 AN OBJECT-ORIENTED SLIDE SHOW

Let’s use object-oriented techniques to build a slide show. Let’s say that we want to show a picture, then play a corresponding sound and wait until the sound is done before going on to the next picture. We’ll use the function (mentioned many chapters ago) blockingPlay(), which plays a sound and waits for it to finish before executing the next statement.

**Program 152: Slide Show As One Big Function**

```python
def playSlideShow():
    pic = makePicture(getMediaPath("barbara.jpg"))
    sound = makeSound(getMediaPath("bassoon-c4.wav"))
    show(pic)
    blockingPlay(sound)
    pic = makePicture(getMediaPath("beach.jpg"))
    sound = makeSound(getMediaPath("bassoon-e4.wav"))
    show(pic)
    blockingPlay(sound)
    pic = makePicture(getMediaPath("church.jpg"))
    sound = makeSound(getMediaPath("bassoon-g4.wav"))
    show(pic)
    blockingPlay(sound)
    pic = makePicture(getMediaPath("jungle2.jpg"))
    sound = makeSound(getMediaPath("bassoon-c4.wav"))
    show(pic)
    blockingPlay(sound)
```

This isn’t a very good program from any perspective. From a procedural programming perspective, there’s an awful lot of duplicated code here. It would be nice to get rid of it. From an object-oriented programming perspective, we should have slide objects.

As we mentioned, objects have two parts. Objects know things—these become instance variables. Objects can do things—these become methods. We’re going to access both of these using dot notation. So what does a slide know? It knows its picture and its sound. What can a slide do?
It can show itself, by showing its picture and playing its sound. To define a slide object in Python (and many other object-oriented programming languages, including Java and C++), we must define a Slide class. We have already seen a couple of class definitions. Let’s go through it again, slowly, building a class from scratch.

As we have already seen, a class defines the instance variables and methods for a set of objects—that is, what each object of that class knows and can do. Each object of the class is an instance of the class. We’ll make multiple slides by making multiple instances of the Slide class. This is aggregation: collections of objects, just as our bodies might make multiple kidney cells or multiple heart cells, each of which knows how to do certain kinds of tasks.

To create a class in Python, we start with:

```python
class Slide:
```

What comes after this, indented, are the methods for creating new slides and playing slides. Let’s add a show() method to our Slide class.

```python
class Slide:
    def show(self):
        show(self.picture)
        blockingPlay(self.sound)
```

To create new instances, we call the class name like a function. We can define new instance variables by simply assigning them. So here is how to create a slide and give it a picture and sound.

```python
>>> slide1 = Slide()
>>> slide1.picture = makePicture(getMediaPath("barbara.jpg"))
>>> slide1.sound = makeSound(getMediaPath("bassoon-c4.wav"))
>>> slide1.show()
```

The slide1.show() method shows the picture and plays the sound.

What is this self stuff? When we execute object.method(), Python finds the method in the object's class, then calls it, using the instance object as an input. It's Python style to name this input variable self (because it is the object itself). Since we have the object in the variable self, we can then access its picture and sound by saying self.picture and self.sound.

But this is still pretty hard to use if we have to set up all the variables from the Command Area. How could we make it easier?
What if we could pass in the sound and picture for the slides as inputs to the Slide class, as if the class were a real function? We can do this by defining something called a constructor. To create new instances with some inputs, we must define a function named __init__. That's "underscore-underscore-i-n-i-t-underscore-underscore." It's the predefined name in Python for a method that initializes new objects. Our __init__ method needs three inputs: the instance itself (because all methods get that), a picture, and a sound.

**Program 153: A Slide Class**

```python
class Slide:
    def __init__(self, pictureFile, soundFile):
        self.picture = makePicture(pictureFile)
        self.sound = makeSound(soundFile)
    def show(self):
        show(self.picture)
        blockingPlay(self.sound)
```

We can use our Slide class to define a slide show like this.

**Program 154: Playing a Slide Show, Using Our Slide Class**

```python
def playSlideShow2():
    pictF = getMediaPath("barbara.jpg")
    soundF = getMediaPath("bassoon-c4.wav")
    slide1 = Slide(pictF, soundF)
    pictF = getMediaPath("beach.jpg")
    soundF = getMediaPath("bassoon-e4.wav")
    slide2 = Slide(pictF, soundF)
    pictF = getMediaPath("church.jpg")
    soundF = getMediaPath("bassoon-g4.wav")
    slide3 = Slide(pictF, soundF)
    pictF = getMediaPath("jungle2.jpg")
    soundF = getMediaPath("bassoon-c4.wav")
    slide4 = Slide(pictF, soundF)
    slide1.show()
    slide2.show()
    slide3.show()
    slide4.show()
```

One of the features of Python that makes it so powerful is that we can mix object-oriented and functional programming styles. Slides are now objects that can easily be stored in lists, like any other kind of Python object. Here’s an example of the same slide show where we use `map` to show the slide show.
**Program 155: Slide Show, In Objects and Functions**

```python
def showSlide(aSlide):
    aSlide.show()

def playSlideShow3():
    pictF = getMediaPath("barbara.jpg")
    soundF = getMediaPath("bassoon-c4.wav")
    slide1 = Slide(pictF,soundF)
    pictF = getMediaPath("beach.jpg")
    soundF = getMediaPath("bassoon-e4.wav")
    slide2 = Slide(pictF,soundF)
    pictF = getMediaPath("church.jpg")
    soundF = getMediaPath("bassoon-g4.wav")
    slide3 = Slide(pictF,soundF)
    pictF = getMediaPath("jungle2.jpg")
    soundF = getMediaPath("bassoon-c4.wav")
    slide4 = Slide(pictF,soundF)
    map(showSlide,[slide1,slide2,slide3,slide4])
```

Is the object-oriented version of the slide show easier to write? It certainly has less replication of code. It features encapsulation, in that the data and behavior of the object are defined in one and only one place, so any change is made in that one place. Being able to use lots of objects (like lists of objects) is called aggregation. This is a powerful idea. We don’t always have to define new classes—we can often use the powerful structures we know, like lists, with existing objects, to great effect.

### 16.4.1 Making the Slide Class More Object-Oriented

What happens if we need to change the picture or sound of some slide? We can: we can simply change the picture or sound instance variables. But if you think about it, you realize that that’s not very safe. What if someone else used the slide show and decided to store movies in the picture variable? It could easily be made to work, but now we have two different uses for the same variable. What you really want is to have a method that handles getting or setting a variable. And if it becomes an issue that the wrong data is being stored in the variable, the set-the-variable method can be changed to check the value, to make sure it’s the right type and valid, before setting the variable.
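A sketch of what such a reworked Slide class might look like (the getter/setter names here are our own choice; the first four defs are stand-ins for JES's media functions so that the sketch runs anywhere, and inside JES you would delete them and use the real ones):

```python
# Stand-ins for JES's makePicture/makeSound/show/blockingPlay,
# so this sketch is runnable outside JES.
def makePicture(f): return "picture:" + f
def makeSound(f): return "sound:" + f
def show(p): pass
def blockingPlay(s): pass

class Slide:
    def __init__(self, pictureFile, soundFile):
        # Even __init__ goes through the setters
        self.setPicture(makePicture(pictureFile))
        self.setSound(makeSound(soundFile))
    def getPicture(self):
        return self.picture
    def setPicture(self, pict):
        self.picture = pict
    def getSound(self):
        return self.sound
    def setSound(self, snd):
        self.sound = snd
    def show(self):
        # "Ask, don't touch": use the getters, not the variables
        show(self.getPicture())
        blockingPlay(self.getSound())
```

If the stored data ever needed validation, only setPicture and setSound would have to change; every caller that "asks" through the methods keeps working.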
In order for this to work, everyone who uses the class has to agree to use the methods for getting and setting the instance variables, and not directly mess with the instance variables. In languages such as Java, one can ask the compiler to keep instance variables private and disallow any uses that directly touch them. In Python, the best we can do is to create the setting-and-getting methods and encourage their use. We call those methods (simply enough) setters and getters.

In such a version of the class, we define setters and getters for the two instance variables—they are quite simple. We also change the show and even the __init__ methods so that, as much as possible, they use the setters and getters instead of accessing the instance variables directly. This is the style of programming that Adele Goldberg meant when she talked about, “Ask, don’t touch.”

Here is something cool about our revised class: we don’t have to change anything in our playSlideShow3 function. It just works, still, even though we made several changes to how the class Slide works. We say that the function playSlideShow3 and the class Slide are loosely coupled. They work together, in well-defined ways, but the inner workings of either can change without impacting the other.

## 16.5 OBJECT-ORIENTED MEDIA

As we said, we have been using objects throughout this book. We have been creating `Picture` objects with the function `makePicture`. We can also create a picture using the normal Python constructor.

```python
>>> pic = Picture(getMediaPath("barbara.jpg"))
```

Here's how the function `show()` is defined. You can ignore `raise` and `__class__`. The key point is that the function is simply executing the existing picture method `show`.

```python
def show(picture):
    if not picture.__class__ == Picture:
        print "show(picture): Input is not a picture"
        raise ValueError
    picture.show()
```

We could have other classes that also know how to `show`.
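For example, two made-up classes of our own can each define a show method with its own behavior (`TextSlide` and `Banner` are hypothetical, and they return strings rather than drawing, so the sketch runs anywhere):

```python
class TextSlide:
    def __init__(self, text):
        self.text = text
    def show(self):
        return "SLIDE: " + self.text

class Banner:
    def __init__(self, text):
        self.text = text
    def show(self):
        return "*** " + self.text + " ***"

# The caller sends the same message without caring which class is which
items = [TextSlide("intro"), Banner("welcome")]
shown = [item.show() for item in items]
```

The loop never checks what kind of object it has; each object answers the show message in its own way.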
Objects can have their own methods with names that other objects also use. Much more powerful is that each of these methods with the same name can achieve the same goal, but in different ways. We defined a class for slides, and it knew how to `show`. For both slides and pictures, the method `show()` says, “Show the object.” But what’s really happening is different in each case: pictures just show themselves, but slides show their pictures and play their sounds.

**Computer Science Idea: Polymorphism**

When the same name can be used to invoke different methods that achieve the same goal, we call that **polymorphism**. It’s very powerful for the programmer. You simply tell an object `show()`—you don’t have to care exactly what method is being executed, and you don’t even have to know exactly what kind of object it is that you’re telling to show itself. You the programmer simply specify your goal: to show the object. The object-oriented program handles the rest.

There are several examples of polymorphism built into the methods that we’re using in JES.¹ For example, both pixels and colors understand the methods `setRed`, `getRed`, `setBlue`, `getBlue`, `setGreen`, and `getGreen`. This allows us to manipulate the colors of the pixels without pulling out the color objects separately. We could have defined the functions to take both kinds of inputs, or to provide different functions for each kind of input, but both of those options get confusing. It’s easy to do with methods.

```python
>>> pic = Picture(getMediaPath("barbara.jpg"))
>>> pic.show()
```

¹ Recall that JES is an environment for programming in Jython, which is a specific kind of Python. The media supports are part of what JES provides—they’re not part of the core of Python.

Another example is the method `writeTo()`. The method `writeTo(filename)` is defined for both pictures and sounds. Did you ever confuse `writePictureTo()` and `writeSoundTo()`? Isn’t it easier to just always write `writeTo(filename)`?
That’s why that method is named the same in both classes, and why polymorphism is so powerful. (You may be wondering why we didn’t introduce this in the first place. Were you ready in Chapter 2 to talk about dot notation and polymorphic methods?)

Overall, there are actually many more methods defined in JES than functions. More specifically, there are a bunch of methods for drawing on pictures that aren’t available as functions.

- As you would expect, pictures understand `pic.addRect(color,x,y,width,height)`, `pic.addRectFilled(color,x,y,width,height)`, `pic.addOval(color,x,y,width,height)`, and `pic.addOvalFilled(color,x,y,width,height)`. See Figure 16.12 for examples of rectangle methods drawn from the following example.

```python
>>> pic = Picture(getMediaPath("640x480.jpg"))
>>> pic.addRectFilled(orange,10,10,100,100)
>>> pic.addRect(blue,200,200,50,50)
>>> pic.show()
>>> pic.writeTo("newrects.jpg")
```

**FIGURE 16.12** Examples of rectangle methods.

See Figure 16.13 for examples of ovals drawn from the following example.

```python
>>> pic = Picture(getMediaPath("640x480.jpg"))
>>> pic.addOval(green,200,200,50,50)
>>> pic.addOvalFilled(magenta,10,10,100,100)
>>> pic.show()
>>> pic.writeTo("ovals.jpg")
```

- Pictures also understand *arcs*. Arcs are literally parts of a circle. The two methods are `pic.addArc(color,x,y,width,height,startAngle,arcAngle)` and `pic.addArcFilled(color,x,y,width,height,startAngle,arcAngle)`. They draw arcs for `arcAngle` degrees, where `startAngle` is the starting point. 0 degrees is at 3 o’clock on the clock face. A positive arc is counterclockwise, and a negative arc is clockwise. The center of the circle is the middle of the rectangle defined by `(x, y)` with the given width and height.
- We can also now draw colored lines, using `pic.addLine(color,x1,y1,x2,y2)`. See Figure 16.14 for examples of arcs and lines drawn from the following example.
```python
>>> pic = Picture(getMediaPath("640x480.jpg"))
>>> pic.addArc(red,10,10,100,100,5,45)
>>> pic.show()
>>> pic.addArcFilled(green,200,100,200,100,1,90)
>>> pic.repaint()
>>> pic.addLine(blue,400,400,600,400)
>>> pic.repaint()
>>> pic.writeTo("arcs-lines.jpg")
```

Text in Java can have styles, but these are limited to make sure that all platforms can replicate them. `pic.addText(color,x,y,string)` is the one we would expect to see. There is also `pic.addTextWithStyle(color,x,y,string,style)`, which takes a style created from `makeStyle(font,emphasis,size)`. The font is `sansSerif`, `serif`, or `mono`. The emphasis is `italic`, `bold`, or `plain`; sum them to get combinations (e.g., `italic+bold`). The `size` is a point size. See Figure 16.15 for examples of text drawn from the following example.

```python
>>> pic = Picture(getMediaPath("640x480.jpg"))
>>> pic.addText(red,10,100,"This is a red string!")
>>> pic.addTextWithStyle(green,10,200,"This is a bold, italic, green, large string",
      makeStyle(sansSerif,bold+italic,18))
>>> pic.addTextWithStyle(blue,10,300,"This is a blue, larger, italic-only, serif string",
      makeStyle(serif,italic,24))
>>> pic.writeTo("text.jpg")
```

The older media functions that we wrote can be rewritten in method form. We will need to create a subclass of the Picture class and add the method to that class.

**Program 157: Making a Sunset Using a Method**

```python
class MyPicture(Picture):
    def makeSunset(self):
        for p in getPixels(self):
            p.setBlue(int(p.getBlue() * 0.7))
            p.setGreen(int(p.getGreen() * 0.7))
```

This can be used like this.

```python
>>> pict = MyPicture(getMediaPath("beach.jpg"))
>>> pict.explore()
>>> pict.makeSunset()
>>> pict.explore()
```

We can also create new subclasses of the Sound class and new methods to work on sound objects. The methods for accessing sound sample values are getSampleValueAt(index) and setSampleValueAt(index,value).
**Program 158: Reverse a Sound with a Method**

```python
class MySound(Sound):
    def reverse(self):
        target = Sound(self.getLength())
        sourceIndex = self.getLength() - 1
        for targetIndex in range(0, target.getLength()):
            sourceValue = self.getSampleValueAt(sourceIndex)
            target.setSampleValueAt(targetIndex, sourceValue)
            sourceIndex = sourceIndex - 1
        return target
```

This can be used like this.

```python
>>> sound = MySound(getMediaPath("always.wav"))
>>> sound.explore()
>>> target = sound.reverse()
>>> target.explore()
```

## 16.6 JOE THE BOX

The earliest example used to teach object-oriented programming was developed by Adele Goldberg and Alan Kay. It’s called Joe the Box. There is nothing new in this example, but it does provide a different example from another perspective, so it’s worth reviewing. Imagine that you have a class Box like the one below:

```python
class Box:
    def __init__(self):
        self.setDefaultColor()
        self.size = 10
        self.position = (10, 10)
    def setDefaultColor(self):
        self.color = red
    def draw(self, canvas):
        addRectFilled(canvas, self.position[0], self.position[1], self.size, self.size, self.color)
```

What will you see if you execute the following code?

```python
>>> canvas = makeEmptyPicture(400, 200)
>>> joe = Box()
>>> joe.draw(canvas)
>>> show(canvas)
```

Let’s trace it out.

- Obviously, the first line just creates a white canvas that is 400 pixels wide and 200 pixels high.
- When we create `joe`, the `__init__` method is called. The method `setDefaultColor` is called on `joe`, so he gets a default color of red. When `self.color = red` is executed, the instance variable `color` is created for `joe` and gets a value of red. We return to `__init__`, where `joe` is given a size of 10 and a position of (10, 10) (size and position both become new instance variables).
- When `joe` is asked to draw himself on the canvas, he’s drawn as a red, filled rectangle (`addRectFilled`), at x position 10 and y position 10, with a size of 10 pixels on each side.
We could add a method to Box that allows us to make `joe` change his size.

```python
class Box:
    def __init__(self):
        self.setDefaultColor()
        self.size = 10
        self.position = (10, 10)

    def setDefaultColor(self):
        self.color = red

    def draw(self, canvas):
        addRectFilled(canvas, self.position[0], self.position[1],
                      self.size, self.size, self.color)
```

16.7 WHY OBJECTS?

One role for objects is to reduce the number of names that you have to remember. Through polymorphism, you only have to remember the name and the goal, not all the various global functions. More importantly, though, objects encapsulate data and behavior. Imagine that you wanted to change the name of an instance variable, and then had to find and update all the methods that use that variable. That's a lot to change. What if you miss one? Changing them all in one place, together, is useful.

Objects reduce the coupling between program components, that is, how dependent they are on each other. Imagine that you have several functions that all use the same global variable. If you change one function so that it stores something slightly different in that variable, all the other functions must also be updated or they won't work. That's called tight coupling. Objects that only use methods on each other (no direct access to instance variables) are more loosely coupled. The access is well-defined and easily changed in only one place. Changes in one object do not demand changes in other objects.

An advantage of loose coupling is ease in developing in team contexts. You can have different people working on different classes. As long as everyone agrees on how access will work through methods, nobody has to know how anybody else's methods work. Object-oriented programming can be particularly useful when working on teams.

Aggregation is also a significant benefit of object systems. You can have lots of objects doing useful things. Want more? Just create them!

Python's objects are similar to the objects of many languages.
One significant difference is in access to instance variables, though. In Python, any object can access and manipulate any other object's instance variables. That's not true in languages like Java, C++, or Smalltalk. In these other languages, access to instance variables from other objects is limited and can even be eliminated entirely—then you can only access objects' instance variables through getter and setter methods.

Another big part of object systems is inheritance. As we saw with our turtle and box examples, we can declare one class (the parent class, or superclass) to be inherited by another class (the child class, or subclass). Inheritance provides for instant polymorphism: the instances of the child automatically have all the data and behavior of the parent class. The child can then add more behavior and data to what the parent class had. This is called making the child a specialization of the parent class. For example, a 3-D rectangle instance might know and do everything that a rectangle instance does by saying `class Rectangle3D(Rectangle)`.

Inheritance gets a lot of press in the object-oriented world, but it's a trade-off. It reduces even further the duplication of code, which is a good thing. In actual practice, though, inheritance isn't used as much as the other advantages of object-oriented programming (like aggregation and encapsulation), and it can be confusing. Whose method is being executed when you type the below? It's invisible from here, and if it's wrong, it can be hard to figure out where it's wrong.

```python
myBox = Rectangle3D()
myBox.draw()
```

So when should you use objects? You should define your own object classes when you have data and behavior that you want to define for all instances of the group (e.g., pictures and sounds). You should use existing objects all the time. They're very powerful. If you're not comfortable with dot notation and the ideas of objects, you can stick with functions—they work just fine.
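The contrast between direct instance-variable access and method-based access, and the way an inherited method picks up a child class's override, can be seen in plain standard Python (no media library needed). This sketch is not from the book; the classes `Rect` and `Rect3D` and their method names are made up for illustration.

```python
# A minimal sketch of encapsulation and inheritance in standard Python.
# Names here (Rect, Rect3D) are hypothetical, not from the book's media classes.

class Rect:
    def __init__(self, width, height):
        self._width = width      # "private" by convention; other objects
        self._height = height    # should use the methods, not these names

    def area(self):
        return self._width * self._height

    def describe(self):
        # describe() reaches the data through area(), not the variables,
        # so a subclass that overrides area() changes describe() for free.
        return "area = " + str(self.area())

class Rect3D(Rect):
    """A specialization: inherits width and height, adds depth."""
    def __init__(self, width, height, depth):
        Rect.__init__(self, width, height)
        self._depth = depth

    def area(self):
        # Override: report the surface area of the 3-D box instead.
        w, h, d = self._width, self._height, self._depth
        return 2 * (w * h + w * d + h * d)
```

Here `describe` is written once in the parent, yet `Rect3D(1, 2, 3).describe()` reports the box's surface area, because the call to `area` is resolved on the object itself; that is the polymorphism this section describes.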
Objects just give you a leg up on more complex systems.

**PROGRAMMING SUMMARY**

Some of the programming pieces that we met in this chapter.

**OBJECT-ORIENTED PROGRAMMING**

<table>
<thead>
<tr>
<th>class</th>
<th>Lets you define a class. The keyword <code>class</code> takes a class name and an optional superclass in parentheses, ending with a colon. Methods for the class follow, indented within the class block.</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>__init__</code></td>
<td>The name of the method called on an object when it’s first created. It’s not required to have one.</td>
</tr>
</tbody>
</table>

**GRAPHICS METHODS**

| Method | Description |
| --- | --- |
| `addRect`, `addRectFilled` | The methods in the `Picture` class for drawing rectangles and filled rectangles. |
| `addOval`, `addOvalFilled` | The methods in the `Picture` class for drawing ovals and filled ovals. |
| `addArc`, `addArcFilled` | The methods in the `Picture` class for drawing arcs and filled arcs. |
| `addText`, `addTextWithStyle` | The methods in the `Picture` class for drawing text and text with style elements (like boldface or sans serif). |
| `addLine` | The method in the `Picture` class for drawing a line. |
| `getRed`, `getGreen`, `getBlue` | The methods for both `Pixel` and `Color` objects for getting the red, green, and blue color components. |
| `setRed`, `setGreen`, `setBlue` | The methods for both `Pixel` and `Color` objects for setting the red, green, and blue color components. |

**PROBLEMS**

16.1 Answer the following questions.

- What is the difference between an instance and a class?
- How are functions and methods different?
- How is object-oriented programming different from procedural programming?
- What is polymorphism?
- What is encapsulation?
- What is aggregation?
- What is a constructor?
- How did biological cells influence the development of the idea of objects?

16.2 Answer the following questions.

- What is inheritance?
- What is a superclass?
- What is a subclass?
- What methods does a child class inherit?
- What instance variables (fields) does a child class inherit?

16.3 Add a method to the Turtle class to draw an equilateral triangle.

16.4 Add a method to the Turtle class to draw a rectangle, given a width and height.

16.5 Add a method to the Turtle class to draw a simple house. It can have a rectangle for the house and an equilateral triangle as the roof.

16.6 Add a method to the Turtle class to draw a street of houses.

16.7 Add a method to the Turtle class to draw a letter.

16.8 Add a method to the Turtle class to draw your initials.

16.9 Create a movie with several turtles moving in each frame.

16.10 Add another constructor to the Slide class that takes just a picture filename.

16.11 Create a SlideShow class that holds a list of slides and shows each slide one at a time.

16.12 Create a CartoonPanel class that takes an array of Pictures and displays the pictures from left to right. It should also have a title and author, and display the title at the top left edge and the author at the top right edge.

16.13 Create a Student class. Each student should have a name and a picture. Add a method, show, that shows the picture for the student.

16.14 Add a field to the SlideShow class to hold the title, and modify the show method to first show a blank picture with the title on it.

16.15 Create a Playlist class that takes a list of sounds and plays them one at a time.

16.16 Use the methods in the Picture class to draw a smiling face.

16.17 Use the methods in the Picture class to draw a rainbow.

16.18 Rewrite the mirror functions as methods in the MyPicture class.

16.19 Make some modifications to Joe the Box.

- Add a method to Box named setColor that takes a color as input, then makes the input color the new color for the box. (Maybe setDefaultColor should call setColor?)
- Add a method to Box named setSize that takes a number as input, then makes the input number the new size for the box.
- Add a method to Box named setPosition that takes a list or tuple as a parameter, then makes that input the new position for the box.
- Change __init__ so that it uses setSize and setPosition rather than simply setting the instance variables.

16.20 Finish the Joe the Box example.

(a) Implement grow and move. The method move takes as input a relative distance like (−10, 15) to move 10 pixels left (x position) and 15 pixels down (y position).

(b) Draw patterns by creating joe and jane, then move a little and draw, grow a little and draw, then repaint the new canvas.

16.21 Create a movie with boxes growing and shrinking in it.

TO DIG DEEPER

There is lots more to do with Python in exploring procedural, functional, and object-oriented programming styles. Mark recommends the books by Mark Lutz (especially [30]) and Richard Hightower [24] as nice introductions to the deeper realms of Python. You might also explore some of the tutorials at the Python Web site (http://www.python.org).
**Factoring**

One last point: we started off this book by introducing another famously hard search problem: FACTORING, the task of finding all prime factors of a given integer. But the difficulty of FACTORING is of a different nature than that of the other hard search problems we have just seen. For example, nobody believes that FACTORING is NP-complete. One major difference is that, in the case of FACTORING, the definition does not contain the now familiar clause "or report that none exists." A number can always be factored into primes. Another difference (possibly not completely unrelated) is this: as we shall see in Chapter 10, FACTORING succumbs to the power of quantum computation—while SAT, TSP and the other NP-complete problems do not seem to.

**8.3 The reductions**

We shall now see that the search problems of Section 8.1 can be reduced to one another as depicted in Figure 8.7. As a consequence, they are all $\textbf{NP}$-complete.

Before we tackle the specific reductions in the tree, let's warm up by relating two versions of the Rudrata problem.

**RUDRATA $(s, t)$-PATH $\longrightarrow$ RUDRATA CYCLE**

Recall the RUDRATA CYCLE problem: given a graph, is there a cycle that passes through each vertex exactly once? We can also formulate the closely related RUDRATA $(s, t)$-PATH problem, in which two vertices $s$ and $t$ are specified, and we want a path starting at $s$ and ending at $t$ that goes through each vertex exactly once. Is it possible that RUDRATA CYCLE is easier than RUDRATA $(s, t)$-PATH? We will show by a reduction that the answer is no.

The reduction maps an instance $(G = (V, E), s, t)$ of RUDRATA $(s, t)$-PATH into an instance $G' = (V', E')$ of RUDRATA CYCLE as follows: $G'$ is simply $G$ with an additional vertex $x$ and two new edges $\{s, x\}$ and $\{x, t\}$.

(Figure: the graph $G$ with its endpoints $s$ and $t$, and the graph $G'$ obtained by adding the new vertex $x$ joined to both $s$ and $t$.)

So $V' = V \cup \{x\}$, and $E' = E \cup \{\{s, x\}, \{x, t\}\}$.
How do we recover a Rudrata $(s, t)$-path in $G$ given any Rudrata cycle in $G'$? Easy, we just delete the edges $\{s, x\}$ and $\{x, t\}$ from the cycle.

The reduction diagram, in words:

- An instance $G = (V, E)$ with nodes $s, t$ of RUDRATA $(s, t)$-PATH is mapped (add node $x$ and edges $\{s, x\}, \{x, t\}$) to an instance $G' = (V', E')$ of RUDRATA CYCLE.
- A solution cycle for $G'$ is mapped (delete edges $\{s, x\}, \{x, t\}$) to a solution path for $G$.
- If $G'$ has no solution, then $G$ has no solution either.

To confirm the validity of this reduction, we have to show that it works in the case of either outcome depicted.

1. When the instance of RUDRATA CYCLE has a solution. Since the new vertex $x$ has only two neighbors, $s$ and $t$, any Rudrata cycle in $G'$ must consecutively traverse the edges $\{t, x\}$ and $\{x, s\}$. The rest of the cycle then traverses every other vertex en route from $s$ to $t$. Thus deleting the two edges $\{t, x\}$ and $\{x, s\}$ from the Rudrata cycle gives a Rudrata path from $s$ to $t$ in the original graph $G$.

2. When the instance of RUDRATA CYCLE does not have a solution. In this case we must show that the original instance of RUDRATA $(s, t)$-PATH cannot have a solution either. It is usually easier to prove the contrapositive, that is, to show that if there is a Rudrata $(s, t)$-path in $G$, then there is also a Rudrata cycle in $G'$. But this is easy: just add the two edges $\{t, x\}$ and $\{x, s\}$ to the Rudrata path to close the cycle.

One last detail, crucial but typically easy to check, is that the pre- and postprocessing functions take time polynomial in the size of the instance $(G, s, t)$.

It is also possible to go in the other direction and reduce RUDRATA CYCLE to RUDRATA $(s, t)$-PATH.
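The pre- and postprocessing functions of this reduction (the $f$ and $h$ of the reduction diagram) are simple enough to sketch directly in Python. This is an illustrative sketch, not from the text; it represents a graph as a vertex set plus a set of `frozenset` edges, and assumes the name `"x"` is not already used for a vertex.

```python
# Sketch of the RUDRATA (s,t)-PATH -> RUDRATA CYCLE reduction.
# f: preprocess the instance; h: postprocess a solution.

def f(V, E, s, t):
    """Map an (s,t)-path instance (V, E, s, t) to a cycle instance:
    add a new vertex x and the two edges {s,x} and {x,t}."""
    x = "x"  # assumption: "x" does not already name a vertex
    Vp = set(V) | {x}
    Ep = set(E) | {frozenset({s, x}), frozenset({x, t})}
    return Vp, Ep, x

def h(cycle_edges, s, t, x):
    """Recover a Rudrata (s,t)-path from a Rudrata cycle in G':
    delete the two edges through x. None means 'no solution'."""
    if cycle_edges is None:   # no cycle in G' => no (s,t)-path in G
        return None
    return [e for e in cycle_edges if x not in e]
```

Both functions clearly run in time linear in the instance size, which is the "one last detail" checked above.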
Together, these reductions demonstrate that the two Rudrata variants are in essence the same problem—which is not too surprising, given that their descriptions are almost the same. But most of the other reductions we will see are between pairs of problems that, on the face of it, look quite different. To show that they are essentially the same, our reductions will have to cleverly translate between them.

**3SAT $\longrightarrow$ INDEPENDENT SET**

One can hardly think of two more different problems. In 3SAT the input is a set of clauses, each with three or fewer literals, for example
$$(\overline{x} \lor y \lor z)\ (x \lor \overline{y} \lor z)\ (x \lor y \lor \overline{z})\ (\overline{x} \lor \overline{y}),$$
and the aim is to find a satisfying truth assignment. In INDEPENDENT SET the input is a graph and a number $g$, and the problem is to find a set of $g$ pairwise non-adjacent vertices. We must somehow relate Boolean logic with graphs!

Let us think. To form a satisfying truth assignment we must pick one literal from each clause and give it the value \texttt{true}. But our choices must be consistent: if we choose $\overline{x}$ in one clause, we cannot choose $x$ in another. Any consistent choice of literals, one from each clause, specifies a truth assignment (variables for which neither literal has been chosen can take on either value).

So, let us represent a clause, say $(x \lor \overline{y} \lor z)$, by a triangle, with vertices labeled $x, \overline{y}, z$. Why triangle? Because a triangle has its three vertices maximally connected, and thus forces us to pick only one of them for the independent set. Repeat this construction for all clauses—a clause with two literals will be represented simply by an edge joining the literals. (A clause with one literal is silly and can be removed in a preprocessing step, since the value of the variable is determined.) In the resulting graph, an independent set has to pick at most one literal from each group (clause).
To force exactly one choice from each clause, take the goal $g$ to be the number of clauses; in our example, $g = 4$. All that is missing now is a way to prevent us from choosing opposite literals (that is, both $x$ and $\overline{x}$) in different clauses. But this is easy: put an edge between any two vertices that represent opposite literals. The resulting graph for our example is shown in Figure 8.8.

Let's recap the construction. Given an instance $I$ of 3SAT, we create an instance $(G, g)$ of INDEPENDENT SET as follows.

- Graph $G$ has a triangle for each clause (or just an edge, if the clause has two literals), with vertices labeled by the clause's literals, and has additional edges between any two vertices that represent opposite literals.
- The goal $g$ is set to the number of clauses.

Clearly, this construction takes polynomial time. However, recall that for a reduction we do not just need an efficient way to map instances of the first problem to instances of the second (the function $f$ in the diagram on page 259), but also a way to reconstruct a solution to the first instance from any solution of the second (the function $h$). As always, there are two things to show.

1. Given an independent set $S$ of $g$ vertices in $G$, it is possible to efficiently recover a satisfying truth assignment to $I$. For any variable $x$, the set $S$ cannot contain vertices labeled both $x$ and $\overline{x}$, because any such pair of vertices is connected by an edge. So assign $x$ a value of \texttt{true} if $S$ contains a vertex labeled $x$, and a value of \texttt{false} if $S$ contains a vertex labeled $\overline{x}$ (if $S$ contains neither, then assign either value to $x$). Since $S$ has $g$ vertices, it must have one vertex per clause; this truth assignment satisfies those particular literals, and thus satisfies all clauses.

2. If graph $G$ has no independent set of size $g$, then the Boolean formula $I$ is unsatisfiable.
It is usually cleaner to prove the contrapositive, that if $I$ has a satisfying assignment then $G$ has an independent set of size $g$. This is easy: for each clause, pick any literal whose value under the satisfying assignment is true (there must be at least one such literal), and add the corresponding vertex to $S$. Do you see why set $S$ must be independent?

**SAT $\longrightarrow$ 3SAT**

This is an interesting and common kind of reduction, from a problem to a special case of itself. We want to show that the problem remains hard even if its inputs are restricted somehow—in the present case, even if all clauses are restricted to have $\leq 3$ literals. Such reductions modify the given instance so as to get rid of the forbidden feature (clauses with $\geq 4$ literals) while keeping the instance essentially the same, in that we can read off a solution to the original instance from any solution of the modified one.

Here's the trick for reducing SAT to 3SAT: given an instance $I$ of SAT, use exactly the same instance for 3SAT, except that any clause with more than three literals, $(a_1 \lor a_2 \lor \cdots \lor a_k)$ (where the $a_i$'s are literals and $k > 3$), is replaced by a set of clauses,
$$(a_1 \lor a_2 \lor y_1)\ (\overline{y}_1 \lor a_3 \lor y_2)\ (\overline{y}_2 \lor a_4 \lor y_3) \cdots (\overline{y}_{k-3} \lor a_{k-1} \lor a_k),$$
where the $y_i$'s are new variables. Call the resulting 3SAT instance $I'$. The conversion from $I$ to $I'$ is clearly polynomial time.

Why does this reduction work? $I'$ is equivalent to $I$ in terms of satisfiability, because for any assignment to the $a_i$'s,
$$\left\{ \begin{array}{c} (a_1 \lor a_2 \lor \cdots \lor a_k) \\ \text{is satisfied} \end{array} \right\} \iff \left\{ \begin{array}{c} \text{there is a setting of the } y_i\text{'s for which} \\ (a_1 \lor a_2 \lor y_1)\ (\overline{y}_1 \lor a_3 \lor y_2) \cdots (\overline{y}_{k-3} \lor a_{k-1} \lor a_k) \\ \text{are all satisfied} \end{array} \right\}$$

To see this, first suppose that the clauses on the right are all satisfied. Then at least one of the literals $a_1, \ldots, a_k$ must be true—otherwise $y_1$ would have to be true, which would in turn force $y_2$ to be true, and so on, eventually falsifying the last clause. But this means $(a_1 \lor a_2 \lor \cdots \lor a_k)$ is also satisfied.

Conversely, if $(a_1 \lor a_2 \lor \cdots \lor a_k)$ is satisfied, then some $a_i$ must be true. Set $y_1, \ldots, y_{i-2}$ to true and the rest to false. This ensures that the clauses on the right are all satisfied.

Thus, any instance of SAT can be transformed into an equivalent instance of 3SAT. In fact, 3SAT remains hard even under the further restriction that no variable appears in more than three clauses. To show this, we must somehow get rid of any variable that appears too many times.

Here's the reduction from 3SAT to its constrained version. Suppose that in the 3SAT instance, variable $x$ appears in $k > 3$ clauses. Then replace its first appearance by $x_1$, its second appearance by $x_2$, and so on, replacing each of its $k$ appearances by a different new variable. Finally, add the clauses
$$(\overline{x}_1 \lor x_2)\ (\overline{x}_2 \lor x_3) \cdots (\overline{x}_k \lor x_1).$$
And repeat for every variable that appears more than three times.

It is easy to see that in the new formula no variable appears more than three times (and in fact, no literal appears more than twice). Furthermore, the extra clauses involving $x_1, x_2, \ldots, x_k$ constrain these variables to have the same value; do you see why? Hence the original instance of 3SAT is satisfiable if and only if the constrained instance is satisfiable.
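The clause-splitting trick is entirely mechanical, and a short sketch makes the chain of new variables concrete. This code is illustrative, not from the text; it represents a literal as a string, with a leading `~` marking negation, and assumes a supply of fresh variable names is given as an iterator.

```python
# Sketch of the SAT -> 3SAT clause-splitting trick:
# (a1 v a2 v ... v ak) with k > 3 becomes a chain of 3-literal clauses
# linked by fresh variables y1, ..., y_{k-3}.

def split_clause(clause, fresh):
    """clause: list of literal strings; fresh: iterator of unused
    variable names. Returns the equivalent list of <=3-literal clauses."""
    k = len(clause)
    if k <= 3:
        return [clause]                      # already legal: keep as is
    y = [next(fresh) for _ in range(k - 3)]  # the new chain variables
    out = [[clause[0], clause[1], y[0]]]     # (a1 v a2 v y1)
    for i in range(1, k - 3):
        out.append(["~" + y[i - 1], clause[i + 1], y[i]])   # (~y_{i} v a_{i+2} v y_{i+1})
    out.append(["~" + y[-1], clause[k - 2], clause[k - 1]]) # (~y_{k-3} v a_{k-1} v a_k)
    return out
```

For a five-literal clause this produces the three clauses $(a_1 \lor a_2 \lor y_1)\ (\overline{y}_1 \lor a_3 \lor y_2)\ (\overline{y}_2 \lor a_4 \lor a_5)$, exactly the chain displayed above.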
**INDEPENDENT SET $\longrightarrow$ VERTEX COVER**

Some reductions rely on ingenuity to relate two very different problems. Others simply record the fact that one problem is a thin disguise of another. To reduce INDEPENDENT SET to VERTEX COVER we just need to notice that a set of nodes $S$ is a vertex cover of graph $G = (V, E)$ (that is, $S$ touches every edge in $E$) if and only if the remaining nodes, $V - S$, are an independent set of $G$ (Figure 8.9).

Therefore, to solve an instance $(G, g)$ of INDEPENDENT SET, simply look for a vertex cover of $G$ with $|V| - g$ nodes. If such a vertex cover exists, then take all nodes not in it. If no such vertex cover exists, then $G$ cannot possibly have an independent set of size $g$.

**INDEPENDENT SET $\longrightarrow$ CLIQUE**

INDEPENDENT SET and CLIQUE are also easy to reduce to one another. Define the complement of a graph $G = (V, E)$ to be $\overline{G} = (V, \overline{E})$, where $\overline{E}$ contains precisely those unordered pairs of vertices that are not in $E$. Then a set of nodes $S$ is an independent set of $G$ if and only if $S$ is a clique of $\overline{G}$. To paraphrase, these nodes have no edges between them in $G$ if and only if they have all possible edges between them in $\overline{G}$.

Therefore, we can reduce INDEPENDENT SET to CLIQUE by mapping an instance $(G, g)$ of INDEPENDENT SET to the corresponding instance $(\overline{G}, g)$ of CLIQUE; the solution to both is identical.

**3SAT $\longrightarrow$ 3D MATCHING**

Again, two very different problems. We must reduce 3SAT to the problem of finding, among a set of boy-girl-pet triples, a subset that contains each boy, each girl, and each pet exactly once. In short, we must design sets of boy-girl-pet triples that somehow behave like Boolean variables and gates!
Consider the following set of four triples, each represented by a triangular node joining a boy, girl, and pet:

(Figure: a variable gadget with boys $b_0, b_1$, girls $g_0, g_1$, and pets $p_0, p_1, p_2, p_3$, joined by the four triples $(b_0, g_0, p_1)$, $(b_0, g_1, p_0)$, $(b_1, g_0, p_2)$, and $(b_1, g_1, p_3)$.)

Suppose that the two boys $b_0$ and $b_1$ and the two girls $g_0$ and $g_1$ are not involved in any other triples. (The four pets $p_0, \ldots, p_3$ will of course belong to other triples as well; for otherwise the instance would trivially have no solution.) Then any matching must contain either the two triples $(b_0, g_1, p_0), (b_1, g_0, p_2)$ or the two triples $(b_0, g_0, p_1), (b_1, g_1, p_3)$, because these are the only ways in which these two boys and girls can find any match. Therefore, this "gadget" has two possible states: it behaves like a Boolean variable!

To then transform an instance of 3SAT to one of 3D MATCHING, we start by creating a copy of the preceding gadget for each variable $x$. Call the resulting nodes $p_{x1}, b_{x0}, g_{x1}$, and so on. The intended interpretation is that boy $b_{x0}$ is matched with girl $g_{x1}$ if $x = \texttt{true}$, and with girl $g_{x0}$ if $x = \texttt{false}$.
Next we must create triples that somehow mimic clauses. For each clause, say $c = (x \lor \overline{y} \lor z)$, introduce a new boy $b_c$ and a new girl $g_c$. They will be involved in three triples, one for each literal in the clause. And the pets in these triples must reflect the three ways whereby the clause can be satisfied: (1) $x = \texttt{true}$, (2) $y = \texttt{false}$, (3) $z = \texttt{true}$.

For (1), we have the triple $(b_c, g_c, p_{x1})$, where $p_{x1}$ is the pet $p_1$ in the gadget for $x$. Here is why we chose $p_1$: if $x = \texttt{true}$, then $b_{x0}$ is matched with $g_{x1}$ and $b_{x1}$ with $g_{x0}$, and so pets $p_{x0}$ and $p_{x2}$ are taken. In which case $b_c$ and $g_c$ can be matched with $p_{x1}$. But if $x = \texttt{false}$, then $p_{x1}$ and $p_{x3}$ are taken, and so $g_c$ and $b_c$ cannot be accommodated this way.

We do the same thing for the other two literals of the clause, which yield triples involving $b_c$ and $g_c$ with either $p_{y0}$ or $p_{y2}$ (for the negated variable $y$) and with either $p_{z1}$ or $p_{z3}$ (for variable $z$).

We have to make sure that for every occurrence of a literal in a clause $c$ there is a different pet to match with $b_c$ and $g_c$. But this is easy: by an earlier reduction we can assume that no literal appears more than twice, and so each variable gadget has enough pets, two for negated occurrences and two for unnegated.

The reduction now seems complete: from any matching we can recover a satisfying truth assignment by simply looking at each variable gadget and seeing with which girl $b_{x0}$ was matched.
And from any satisfying truth assignment we can match the gadget corresponding to each variable $x$ so that triples $(b_{x0}, g_{x1}, p_{x0})$ and $(b_{x1}, g_{x0}, p_{x2})$ are chosen if $x = \texttt{true}$ and triples $(b_{x0}, g_{x0}, p_{x1})$ and $(b_{x1}, g_{x1}, p_{x3})$ are chosen if $x = \texttt{false}$; and for each clause $c$ match $b_c$ and $g_c$ with the pet that corresponds to one of its satisfying literals.

But one last problem remains: in the matching defined at the end of the last paragraph, some pets may be left unmatched. In fact, if there are $n$ variables and $m$ clauses, then exactly $2n - m$ pets will be left unmatched (you can check that this number is sure to be positive, because we have at most three occurrences of every variable, and at least two literals in every clause). But this is easy to fix: Add $2n - m$ new boy-girl couples that are "generic animal-lovers," and match them by triples with all the pets!

**3D MATCHING $\longrightarrow$ ZOE**

Recall that in ZOE we are given an $m \times n$ matrix $A$ with $0$–$1$ entries, and we must find a $0$–$1$ vector $x = (x_1, \ldots, x_n)$ such that the $m$ equations
$$Ax = \mathbf{1}$$
are satisfied, where by $\mathbf{1}$ we denote the column vector of all 1's. How can we express the 3D MATCHING problem in this framework?

ZOE and ILP are very useful problems precisely because they provide a format in which many combinatorial problems can be expressed. In such a formulation we think of the $0$–$1$ variables as describing a solution, and we write equations expressing the constraints of the problem.

For example, here is how we express an instance of 3D MATCHING ($m$ boys, $m$ girls, $m$ pets, and $n$ boy-girl-pet triples) in the language of ZOE. We have $0$–$1$ variables $x_1, \ldots, x_n$, one per triple, where $x_i = 1$ means that the $i$th triple is chosen for the matching, and $x_i = 0$ means that it is not chosen. Now all we have to do is write equations stating that the solution described by the $x_i$'s is a legitimate matching.
For each boy (or girl, or pet), suppose that the triples containing him (or her, or it) are those numbered $j_1, j_2, \ldots, j_k$; the appropriate equation is then $$x_{j_1} + x_{j_2} + \cdots + x_{j_k} = 1,$$ which states that exactly one of these triples must be included in the matching. For example, here is the $A$ matrix for an instance of 3D MATCHING we saw earlier: $$A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 \\ 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix}.$$ The five columns of $A$ correspond to the five triples, while the nine rows are for Al, Bob, Chet, Alice, Beatrice, Carol, Armadillo, Bobcat, and Canary, respectively. It is straightforward to argue that solutions to the two instances translate back and forth. **ZOE $\rightarrow$ SUBSET SUM** This is a reduction between two special cases of ILP: one with many equations but only $0-1$ coefficients, and the other with a single equation but arbitrary integer coefficients. The reduction is based on a simple and time-honored idea: $0-1$ vectors can encode numbers! For example, given this instance of ZOE: $$A = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},$$ we are looking for a set of columns of $A$ that, added together, make up the all-1’s vector. But if we think of the columns as binary integers (read from top to bottom), we are looking for a subset of the integers $18, 5, 4, 8$ that add up to the binary integer $11111_2 = 31$. And this is an instance of SUBSET SUM. The reduction is complete! Except for one detail, the one that usually spoils the close connection between $0-1$ vectors and binary integers: carry. Because of carry, 5-bit binary integers can add up to 31 (for example, $5 + 6 + 20 = 31$ or, in binary, $00101_2 + 00110_2 + 10100_2 = 11111_2$) even when the sum of the corresponding vectors is not $(1,1,1,1,1)$.
But this is easy to fix: Think of the column vectors not as integers in base 2, but as integers in base $n+1$—one more than the number of columns. This way, since at most $n$ integers are added, and all their digits are 0 and 1, there can be no carry, and our reduction works. **ZOE $\rightarrow$ ILP** 3SAT is a special case of SAT—or, SAT is a generalization of 3SAT. By special case we mean that the instances of 3SAT are a subset of the instances of SAT (in particular, the ones with no long clauses), and the definition of solution is the same in both problems (an assignment satisfying all clauses). Consequently, there is a reduction from 3SAT to SAT, in which the input undergoes no transformation, and the solution to the target instance is also kept unchanged. In other words, functions \( f \) and \( h \) from the reduction diagram (on page 259) are both the identity. This sounds trivial enough, but it is a very useful and common way of establishing that a problem is \( \textbf{NP} \)-complete: Simply notice that it is a generalization of a known \( \textbf{NP} \)-complete problem. For example, the SET COVER problem is \( \textbf{NP} \)-complete because it is a generalization of VERTEX COVER (and also, incidentally, of 3D MATCHING). See Exercise 8.10 for more examples. Often it takes a little work to establish that one problem is a special case of another. The reduction from ZOE to ILP is a case in point. In ILP we are looking for an integer vector \( x \) that satisfies \( Ax \leq b \), for given matrix \( A \) and vector \( b \). To write an instance of ZOE in this precise form, we need to rewrite each equation of the ZOE instance as two inequalities (recall the transformations of Section 7.1.4), and to add for each variable \( x_i \) the inequalities \( x_i \leq 1 \) and \( -x_i \leq 0 \).

Figure 8.10 Rudrata cycle with paired edges: \( C = \{ (e_1, e_3), (e_5, e_6), (e_4, e_5), (e_3, e_7), (e_3, e_8) \} \).
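Returning for a moment to ZOE $\rightarrow$ SUBSET SUM, the base-$(n+1)$ encoding just described can be sketched in a few lines of Python. This is our own illustration (the function name and matrix representation are ours, not the book's): each column of the 0–1 matrix is read top-to-bottom as the digits of an integer in base $n+1$, so that no sum of at most $n$ such integers can carry.

```python
def zoe_to_subset_sum(A):
    """Encode a ZOE instance Ax = 1 as a SUBSET SUM instance.

    Each 0-1 column of A is read as an integer in base n+1 (n = number
    of columns), so adding at most n of them can never produce a carry.
    Returns one integer per column, and the target integer that encodes
    the all-1's vector.
    """
    m, n = len(A), len(A[0])
    base = n + 1
    weights = [sum(A[i][j] * base ** (m - 1 - i) for i in range(m))
               for j in range(n)]
    target = sum(base ** (m - 1 - i) for i in range(m))
    return weights, target
```

Because carries are impossible, a subset of the weights adds up to the target exactly when the corresponding columns add up to the all-1's vector.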
**ZOE $\rightarrow$ RUDRATA CYCLE** In the RUDRATA CYCLE problem we seek a cycle in a graph that visits every vertex exactly once. We shall prove it \( \textbf{NP} \)-complete in two stages: first we will reduce ZOE to a generalization of RUDRATA CYCLE, called RUDRATA CYCLE WITH PAIRED EDGES, and then we shall see how to get rid of the extra features of that problem and reduce it to the plain RUDRATA CYCLE problem. In an instance of RUDRATA CYCLE WITH PAIRED EDGES we are given a graph \( G = (V, E) \) and a set \( C \subseteq E \times E \) of pairs of edges. We seek a cycle that (1) visits all vertices once, like a Rudrata cycle should, and (2) for every pair of edges \((e, e')\) in \( C \), traverses either edge \( e \) or edge \( e' \)—exactly one of them. In the simple example of Figure 8.10 a solution is shown in bold. Notice that we allow two or more parallel edges between two nodes—a feature that doesn’t make sense in most graph problems—since now the different copies of an edge can be paired with other copies of edges in ways that do make a difference. Now for the reduction of ZOE to RUDRATA CYCLE WITH PAIRED EDGES. Given an instance of ZOE, $Ax = 1$ (where $A$ is an $m \times n$ matrix with 0–1 entries, and thus describes $m$ equations in $n$ variables), the graph we construct has the very simple structure shown in Figure 8.11: a cycle that connects $m+n$ collections of parallel edges. For each variable $x_i$ we have two parallel edges (corresponding to $x_i = 1$ and $x_i = 0$). And for each equation $x_{j_1} + \cdots + x_{j_k} = 1$ involving $k$ variables we have $k$ parallel edges, one for every variable appearing in the equation. This is the whole graph. Evidently, any Rudrata cycle in this graph must traverse the $m+n$ collections of parallel edges one by one, choosing one edge from each collection. This way, the cycle “chooses” for each variable a value—0 or 1—and, for each equation, a variable appearing in it.
The whole reduction can’t be this simple, of course. The structure of the matrix $A$ (and not just its dimensions) must be reflected somewhere, and there is one place left: the set $C$ of pairs of edges such that exactly one edge in each pair is traversed. For every equation (recall there are $m$ in total), and for every variable $x_i$ appearing in it, we add to $C$ the pair $(e, e')$ where $e$ is the edge corresponding to the appearance of $x_i$ in that particular equation (on the left-hand side of Figure 8.11), and $e'$ is the edge corresponding to the variable assignment $x_i = 0$ (on the right side of the figure). This completes the construction. Take any solution of this instance of RUDRATA CYCLE WITH PAIRED EDGES. As discussed before, it picks a value for each variable and a variable for every equation. We claim that the values thus chosen are a solution to the original instance of ZOE. If a variable $x_i$ has value 1, then the edge $x_i = 0$ is not traversed, and thus all edges associated with $x_i$ on the equation side must be traversed (since they are paired in $C$ with the $x_i = 0$ edge). So, in each equation exactly one of the variables appearing in it has value 1—which is the same as saying that all equations are satisfied. The other direction is straightforward as well: from a solution to the instance of ZOE one easily obtains an appropriate Rudrata cycle. **Getting Rid of the Edge Pairs.** So far we have a reduction from ZOE to RUDRATA CYCLE WITH PAIRED EDGES; but we are really interested in RUDRATA CYCLE, which is a special case of the problem with paired edges: the one in which the set of pairs $C$ is empty. To accomplish our goal, we need, as usual, to find a way of getting rid of the unwanted feature—in this case the edge pairs. Consider the graph shown in Figure 8.12, and suppose that it is a part of a larger graph $G$ in such a way that only the four endpoints $a, b, c, d$ touch the rest of the graph.
We claim that this graph has the following important property: in any Rudrata cycle of $G$ the subgraph shown must be traversed in one of the two ways shown in bold in Figure 8.12(b) and (c). Here is why. Suppose that the cycle first enters the subgraph from vertex $a$ continuing to $f$. Then it must continue to vertex $g$, because $g$ has degree 2 and so it must be visited immediately after one of its adjacent nodes is visited—otherwise there is no way to include it in the cycle. Hence we must go on to node $h$, and here we seem to have a choice. We could continue on to $j$, or return to $c$. But if we take the second option, how are we going to visit the rest of the subgraph? (A Rudrata cycle must leave no vertex unvisited.) It is easy to see that this would be impossible, and so from $h$ we have no choice but to continue to $j$ and from there to visit the rest of the graph as shown in Figure 8.12(b). By symmetry, if the Rudrata cycle enters this subgraph at $c$, it must traverse it as in Figure 8.12(c). And these are the only two ways. But this property tells us something important: this gadget behaves just like two edges $\{a, b\}$ and $\{c, d\}$ that are paired up in the RUDRATA CYCLE WITH PAIRED EDGES problem (see Figure 8.12(d)). The rest of the reduction is now clear: to reduce RUDRATA CYCLE WITH PAIRED EDGES to RUDRATA CYCLE we go through the pairs in $C$ one by one. To get rid of each pair $\{\{a, b\}, \{c, d\}\}$ we replace the two edges with the gadget in Figure 8.12(a). For any other pair in $C$ that involves $\{a, b\}$, we replace the edge $\{a, b\}$ with the new edge $\{a, f\}$, where $f$ is from the gadget: the traversal of $\{a, f\}$ is from now on an indication that edge $\{a, b\}$ in the old graph would be traversed. Similarly, $\{c, h\}$ replaces $\{c, d\}$.
After $|C|$ such replacements (performed in polynomial time, since each replacement adds only 12 vertices to the graph) we are done, and the Rudrata cycles in the resulting graph will be in one-to-one correspondence with the Rudrata cycles in the original graph that conform to the constraints in $C$.

Figure 8.12 A gadget for enforcing paired behavior; in panel (d), \( C = \{\{a, b\}, \{c, d\}\} \).

**RUDRATA CYCLE $\rightarrow$ TSP** Given a graph \( G = (V, E) \), construct the following instance of the TSP: the set of cities is the same as \( V \), and the distance between cities \( u \) and \( v \) is 1 if \( \{u, v\} \) is an edge of \( G \) and \( 1 + \alpha \) otherwise, for some \( \alpha > 0 \) to be determined. The budget of the TSP instance is equal to the number of nodes, \( |V| \). It is easy to see that if \( G \) has a Rudrata cycle, then the same cycle is also a tour within the budget of the TSP instance; and that conversely, if \( G \) has no Rudrata cycle, then there is no solution: the cheapest possible TSP tour has cost at least \( n + \alpha \) (it must use at least one edge of length \( 1 + \alpha \), and the total length of all \( n - 1 \) others is at least \( n - 1 \)). Thus RUDRATA CYCLE reduces to TSP. In this reduction, we introduced the parameter \( \alpha \) because by varying it, we can obtain two interesting results. If \( \alpha = 1 \), then all distances are either 1 or 2, and so this instance of the TSP satisfies the triangle inequality: if \( i, j, k \) are cities, then \( d_{ij} + d_{jk} \geq d_{ik} \) (proof: \( a + b \geq c \) holds for any numbers \( 1 \leq a, b, c \leq 2 \)). This is a special case of the TSP which is of practical importance and which, as we shall see in Chapter 9, is in a certain sense easier, because it can be efficiently approximated.
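The distance matrix of the RUDRATA CYCLE $\rightarrow$ TSP construction is simple enough to write down directly. The sketch below is our own illustration (the function name and the edge-list representation of $G$ are assumptions, not from the book): graph edges get distance 1, all other pairs get $1 + \alpha$, and the budget is $|V|$.

```python
def rudrata_to_tsp(n, edges, alpha=1):
    """Distances for the RUDRATA CYCLE -> TSP reduction.

    Vertices are 0..n-1.  Distance is 1 across an edge of G and
    1 + alpha otherwise, so a tour within the budget n must use graph
    edges only -- that is, it must be a Rudrata cycle of G.
    Returns the distance matrix and the budget.
    """
    d = [[0 if i == j else 1 + alpha for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = d[v][u] = 1
    return d, n
```

With `alpha=1` the instance satisfies the triangle inequality; with a large `alpha` it exhibits the gap property discussed next.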
If on the other hand \( \alpha \) is large, then the resulting instance of the TSP may not satisfy the triangle inequality, but has another important property: either it has a solution of cost \( n \) or less, or all its solutions have cost at least \( n + \alpha \) (which now can be arbitrarily larger than \( n \)). There can be nothing in between! As we shall see in Chapter 9, this important gap property implies that, unless \( P = NP \), no approximation algorithm is possible. **ANY PROBLEM IN NP $\rightarrow$ SAT** We have reduced SAT to the various search problems in Figure 8.7. Now we come full circle and argue that all these problems—and in fact all problems in NP—reduce to SAT. In particular, we shall show that all problems in NP can be reduced to a generalization of SAT which we call CIRCUIT SAT. In CIRCUIT SAT we are given a (Boolean) circuit (see Figure 8.13, and recall Section 7.7), a dag whose vertices are gates of five different types: - AND gates and OR gates have indegree 2. - NOT gates have indegree 1. - Known input gates have no incoming edges and are labeled false or true. - Unknown input gates have no incoming edges and are labeled "?". One of the sinks of the dag is designated as the output gate. Given an assignment of values to the unknown inputs, we can evaluate the gates of the circuit in topological order, using the rules of Boolean logic (such as \( \text{false} \lor \text{true} = \text{true} \)), until we obtain the value at the output gate. This is the value of the circuit for the particular assignment to the inputs. For instance, the circuit in Figure 8.13 evaluates to false under the assignment true, false, true (from left to right). CIRCUIT SAT is then the following search problem: Given a circuit, find a truth assignment for the unknown inputs such that the output gate evaluates to true, or report that no such assignment exists.

Figure 8.13 An instance of CIRCUIT SAT.

CIRCUIT SAT is a generalization of SAT.
To see why, notice that SAT asks for a satisfying truth assignment for a circuit that has this simple structure: a bunch of AND gates at the top join the clauses, and the result of this big AND is the output. Each clause is the OR of its literals. And each literal is either an unknown input gate or the NOT of one. There are no known input gates. Going in the other direction, CIRCUIT SAT can also be reduced to SAT. Here is how we can rewrite any circuit in conjunctive normal form (the AND of clauses): for each gate $g$ in the circuit we create a variable $g$, and we model the effect of the gate using a few clauses; for instance, an AND gate $g$ with inputs $h_1$ and $h_2$ yields the clauses $(\bar{g} \lor h_1)$, $(\bar{g} \lor h_2)$, and $(g \lor \bar{h}_1 \lor \bar{h}_2)$, while OR and NOT gates are handled analogously. (Do you see that these clauses do, in fact, force exactly the desired effect?) And to finish up, if $g$ is the output gate, we force it to be true by adding the clause $(g)$. The resulting instance of SAT is equivalent to the given instance of CIRCUIT SAT: the satisfying truth assignments of this conjunctive normal form are in one-to-one correspondence with those of the circuit. Now that we know CIRCUIT SAT reduces to SAT, we turn to our main job, showing that all search problems reduce to CIRCUIT SAT. So, suppose that \(A\) is a problem in \(\text{NP}\). We must discover a reduction from \(A\) to CIRCUIT SAT. This sounds very difficult, because we know almost nothing about \(A\)! All we know about \(A\) is that it is a search problem, so we must put this knowledge to work. The main feature of a search problem is that any solution to it can quickly be checked: there is an algorithm \(C\) that checks, given an instance \(I\) and a proposed solution \(S\), whether or not \(S\) is a solution of \(I\). Moreover, \(C\) makes this decision in time polynomial in the length of \(I\) (we can assume that \(S\) is itself encoded as a binary string, and we know that the length of this string is polynomial in the length of \(I\)).
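The gate-by-gate clause construction used above to rewrite a circuit in conjunctive normal form can be made concrete. The sketch below is our own encoding (literals as (variable, polarity) pairs, gate names as strings); it emits, for each gate, clauses that force the gate's variable to equal the gate's output.

```python
def gate_clauses(g, kind, inputs=()):
    """Clauses forcing variable g to carry the output value of the gate.

    A literal is a (variable, polarity) pair; polarity False means the
    variable appears negated.  A clause is a list of literals.
    """
    pos = lambda v: (v, True)
    neg = lambda v: (v, False)
    if kind == "true":                      # known input gate labeled true
        return [[pos(g)]]
    if kind == "false":                     # known input gate labeled false
        return [[neg(g)]]
    if kind == "not":                       # g = NOT h
        (h,) = inputs
        return [[pos(g), pos(h)], [neg(g), neg(h)]]
    if kind == "and":                       # g = h1 AND h2
        h1, h2 = inputs
        return [[neg(g), pos(h1)], [neg(g), pos(h2)],
                [pos(g), neg(h1), neg(h2)]]
    if kind == "or":                        # g = h1 OR h2
        h1, h2 = inputs
        return [[pos(g), neg(h1)], [pos(g), neg(h2)],
                [neg(g), pos(h1), pos(h2)]]
    raise ValueError(kind)

def satisfied(clauses, assignment):
    """Check whether every clause has at least one true literal."""
    return all(any(assignment[v] == p for v, p in cl) for cl in clauses)
```

Concatenating `gate_clauses` over every gate, plus the unit clause for the output gate, yields the CNF formula whose satisfying assignments correspond one-to-one with those of the circuit.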
Recall now our argument in Section 7.7 that any polynomial algorithm can be rendered as a circuit, whose input gates encode the input to the algorithm. Naturally, for any input length (number of input bits) the circuit will be scaled to the appropriate number of inputs, but the total number of gates of the circuit will be polynomial in the number of inputs. If the polynomial algorithm in question solves a problem that requires a yes or no answer (as is the situation with \(C\): “Does \(S\) encode a solution to the instance encoded by \(I\)?”), then this answer is given at the output gate. We conclude that, given any instance \(I\) of problem \(A\), we can construct in polynomial time a circuit whose known inputs are the bits of \(I\), and whose unknown inputs are the bits of \(S\), such that the output is true if and only if the unknown inputs spell a solution \(S\) of \(I\). In other words, the satisfying truth assignments to the unknown inputs of the circuit are in one-to-one correspondence with the solutions of instance \(I\) of \(A\). The reduction is complete.

**Unsolvable problems**

An \( \text{NP} \)-complete problem can at least be solved by some algorithm—the trouble is that this algorithm will be exponential. But it turns out there are perfectly decent computational problems for which \textit{no algorithms exist at all!} One famous problem of this sort is an arithmetical version of SAT. Given a polynomial equation in many variables, perhaps \[ x^3yz + 2y^4z^2 - 7xy^5z = 6, \] are there integer values of \( x, y, z \) that satisfy it? There is no algorithm that solves this problem. No algorithm at all, polynomial, exponential, doubly exponential, or worse! Such problems are called \textit{unsolvable}. The first unsolvable problem was discovered in 1936 by Alan M. Turing, then a student of mathematics at Cambridge, England.
When Turing came up with it, there were no computers or programming languages (in fact, it can be argued that these things came about \textit{exactly because} this brilliant thought occurred to Turing). But today we can state it in familiar terms. Suppose that you are given a program in your favorite programming language, along with a particular input. Will the program ever terminate, once started on this input? This is a very reasonable question. Many of us would be ecstatic if we had an algorithm, call it \( \text{terminates}(p, x) \), that took as input a file containing a program \( p \), and a file of data \( x \), and after grinding away, finally told us whether or not \( p \) would ever stop if started on \( x \). But how would you go about writing the program \( \text{terminates} \)? (If you haven’t seen this before, it’s worth thinking about it for a while, to appreciate the difficulty of writing such a “universal infinite-loop detector.”) Well, you can’t. \textit{Such an algorithm does not exist!} And here is the proof: Suppose we actually had such a program \( \text{terminates}(p, x) \). Then we could use it as a subroutine of the following evil program:

    function paradox(z: file)
    1: if terminates(z, z) goto 1

Notice what \text{paradox} does: it terminates if and only if program \( z \) does not terminate when given its own code as input. You should smell trouble. What if we put this program in a file named \text{paradox} and we executed \( \text{paradox}(\text{paradox}) \)? Would this execution ever stop? Or not? Neither answer is possible. Since we arrived at this contradiction by assuming that there is an algorithm for telling whether programs terminate, we must conclude that this problem cannot be solved by any algorithm. By the way, all this tells us something important about programming: It will never be automated, it will forever depend on discipline, ingenuity, and hackery.
We now know that you can’t tell whether a program has an infinite loop. But can you tell if it has a buffer overrun? Do you see how to use the unsolvability of the “halting problem” to show that this too, is unsolvable? Exercises 8.1. **Optimization versus search.** Recall the traveling salesman problem: TSP \[\text{Input: A matrix of distances; a budget } b\] \[\text{Output: A tour which passes through all the cities and has length } \leq b, \text{ if such a tour exists.}\] The optimization version of this problem asks directly for the shortest tour. TSP-OPT \[\text{Input: A matrix of distances}\] \[\text{Output: The shortest tour which passes through all the cities.}\] Show that if TSP can be solved in polynomial time, then so can TSP-OPT. 8.2. **Search versus decision.** Suppose you have a procedure which runs in polynomial time and tells you whether or not a graph has a Rudrata path. Show that you can use it to develop a polynomial-time algorithm for RUDRATA PATH (which returns the actual path, if it exists). 8.3. **STINGY SAT** is the following problem: given a set of clauses (each a disjunction of literals) and an integer \( k \), find a satisfying assignment in which at most \( k \) variables are true, if such an assignment exists. Prove that STINGY SAT is \( \text{NP} \)-complete. 8.4. Consider the CLIQUE problem restricted to graphs in which every vertex has degree at most 3. Call this problem CLIQUE-3. (a) Prove that CLIQUE-3 is in \( \text{NP} \). (b) What is wrong with the following proof of \( \text{NP} \)-completeness for CLIQUE-3? We know that the CLIQUE problem in general graphs is \( \text{NP} \)-complete, so it is enough to present a reduction from CLIQUE-3 to CLIQUE. Given a graph \( G \) with vertices of degree \( \leq 3 \), and a parameter \( q \), the reduction leaves the graph and the parameter unchanged: clearly the output of the reduction is a possible input for the CLIQUE problem. Furthermore, the answer to both problems is identical. 
This proves the correctness of the reduction and, therefore, the \( \text{NP} \)-completeness of CLIQUE-3. (c) It is true that the VERTEX COVER problem remains \( \text{NP} \)-complete even when restricted to graphs in which every vertex has degree at most 3. Call this problem VC-3. What is wrong with the following proof of \( \text{NP} \)-completeness for CLIQUE-3? We present a reduction from VC-3 to CLIQUE-3. Given a graph \( G = (V, E) \) with node degrees bounded by 3, and a parameter \( b \), we create an instance of CLIQUE-3 by leaving the graph unchanged and switching the parameter to \( |V| - b \). Now, a subset \( C \subseteq V \) is a vertex cover in \( G \) if and only if the complementary set \( V - C \) is a clique in \( G \). Therefore \( G \) has a vertex cover of size \( \leq b \) if and only if it has a clique of size \( \geq |V| - b \). This proves the correctness of the reduction and, consequently, the \( \text{NP} \)-completeness of CLIQUE-3. (d) Describe an \( O(|V|^4) \) algorithm for CLIQUE-3. 8.5. Give a simple reduction from 3D MATCHING to SAT, and another from RUDRATA CYCLE to SAT. *(Hint: In the latter case you may use variables \( x_{ij} \) whose intuitive meaning is “vertex \( i \) is the \( j \)th vertex of the Hamilton cycle”; you then need to write clauses that express the constraints of the problem.)* 8.6. On page 266 we saw that 3SAT remains NP-complete even when restricted to formulas in which each literal appears at most twice. (a) Show that if each literal appears at most once, then the problem is solvable in polynomial time. (b) Show that INDEPENDENT SET remains NP-complete even in the special case when all the nodes in the graph have degree at most 4. 8.7. Consider a special case of 3SAT in which all clauses have exactly three literals, and each variable appears at most three times. Show that this problem can be solved in polynomial time. 
(Hint: create a bipartite graph with clauses on the left, variables on the right, and edges whenever a variable appears in a clause. Use Exercise 7.30 to show that this graph has a matching.) 8.8. In the EXACT 4SAT problem, the input is a set of clauses, each of which is a disjunction of exactly four literals, and such that each variable occurs at most once in each clause. The goal is to find a satisfying assignment, if one exists. Prove that EXACT 4SAT is NP-complete. 8.9. In the HITTING SET problem, we are given a family of sets \( \{S_1, S_2, \ldots, S_n\} \) and a budget \( b \), and we wish to find a set \( H \) of size \( \leq b \) which intersects every \( S_i \), if such an \( H \) exists. In other words, we want \( H \cap S_i \neq \emptyset \) for all \( i \). Show that HITTING SET is NP-complete. 8.10. Proving NP-completeness by generalization. For each of the problems below, prove that it is NP-complete by showing that it is a generalization of some NP-complete problem we have seen in this chapter. (a) SUBGRAPH ISOMORPHISM: Given as input two undirected graphs \( G \) and \( H \), determine whether \( G \) is a subgraph of \( H \) (that is, whether by deleting certain vertices and edges of \( H \) we obtain a graph that is, up to renaming of vertices, identical to \( G \)), and if so, return the corresponding mapping of \( V(G) \) into \( V(H) \). (b) LONGEST PATH: Given a graph \( G \) and an integer \( g \), find in \( G \) a simple path of length \( g \). (c) MAX SAT: Given a CNF formula and an integer \( g \), find a truth assignment that satisfies at least \( g \) clauses. (d) DENSE SUBGRAPH: Given a graph and two integers \( a \) and \( b \), find a set of \( a \) vertices of \( G \) such that there are at least \( b \) edges between them. (e) SPARSE SUBGRAPH: Given a graph and two integers \( a \) and \( b \), find a set of \( a \) vertices of \( G \) such that there are at most \( b \) edges between them. (f) SET COVER. 
(This problem generalizes two known NP-complete problems.) (g) RELIABLE NETWORK: We are given two \( n \times n \) matrices, a distance matrix \( d_{ij} \) and a connectivity requirement matrix \( r_{ij} \), as well as a budget \( b \); we must find a graph \( G = (\{1, 2, \ldots, n\}, E) \) such that (1) the total cost of all edges is \( b \) or less and (2) between any two distinct vertices \( i \) and \( j \) there are \( r_{ij} \) vertex-disjoint paths. (Hint: Suppose that all \( d_{ij} \)'s are 1 or 2, \( b = n \), and all \( r_{ij} \)'s are 2. Which well known NP-complete problem is this?) 8.11. There are many variants of Rudrata’s problem, depending on whether the graph is undirected or directed, and whether a cycle or path is sought. Reduce the DIRECTED RUDRATA PATH problem to each of the following. (a) The (undirected) RUDRATA PATH problem. (b) The undirected RUDRATA \((s, t)\)-PATH problem, which is just like RUDRATA PATH except that the endpoints of the path are specified in the input. 8.12. The \(k\)-SPANNING TREE problem is the following. \textit{Input:} An undirected graph \(G = (V, E)\) \textit{Output:} A spanning tree of \(G\) in which each node has degree \(\leq k\), if such a tree exists. Show that for any \(k \geq 2\): (a) \(k\)-SPANNING TREE is a search problem. (b) \(k\)-SPANNING TREE is \textbf{NP}-complete. (\textit{Hint:} Start with \(k = 2\) and consider the relation between this problem and RUDRATA PATH.) 8.13. Determine which of the following problems are \textbf{NP}-complete and which are solvable in polynomial time. In each problem you are given an undirected graph \(G = (V, E)\), along with: (a) A set of nodes \(L \subseteq V\), and you must find a spanning tree such that its set of leaves includes the set \(L\). (b) A set of nodes \(L \subseteq V\), and you must find a spanning tree such that its set of leaves is precisely the set \(L\). 
(c) A set of nodes \(L \subseteq V\), and you must find a spanning tree such that its set of leaves is included in the set \(L\). (d) An integer \(k\), and you must find a spanning tree with \(k\) or fewer leaves. (e) An integer \(k\), and you must find a spanning tree with \(k\) or more leaves. (f) An integer \(k\), and you must find a spanning tree with exactly \(k\) leaves. (\textit{Hint:} All the \textbf{NP}-completeness proofs are by generalization, except for one.) 8.14. Prove that the following problem is \textbf{NP}-complete: given an undirected graph \(G = (V, E)\) and an integer \(k\), return a clique of size \(k\) as well as an independent set of size \(k\), provided both exist. 8.15. Show that the following problem is \textbf{NP}-complete. \textbf{MAXIMUM COMMON SUBGRAPH} \textit{Input:} Two graphs \(G_1 = (V_1, E_1)\) and \(G_2 = (V_2, E_2)\); a budget \(b\). \textit{Output:} Two sets of nodes \(V'_1 \subseteq V_1\) and \(V'_2 \subseteq V_2\) whose deletion leaves at least \(b\) nodes in each graph, and makes the two graphs identical. 8.16. We are feeling experimental and want to create a new dish. There are various ingredients we can choose from and we’d like to use as many of them as possible, but some ingredients don’t go well with others. If there are \(n\) possible ingredients (numbered 1 to \(n\)), we write down an \(n \times n\) matrix giving the discord between any pair of ingredients. This discord is a real number between 0.0 and 1.0, where 0.0 means “they go together perfectly” and 1.0 means “they really don’t go together.” Here’s an example matrix when there are five possible ingredients. \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline & 1 & 2 & 3 & 4 & 5 \\ \hline 1 & 0.0 & 0.4 & 0.2 & 0.9 & 1.0 \\ 2 & 0.4 & 0.0 & 0.1 & 1.0 & 0.2 \\ 3 & 0.2 & 0.1 & 0.0 & 0.8 & 0.5 \\ 4 & 0.9 & 1.0 & 0.8 & 0.0 & 0.2 \\ 5 & 1.0 & 0.2 & 0.5 & 0.2 & 0.0 \\ \hline \end{tabular} \end{center}
Conference Organizers

Andrew J. Hutton, Steamballoon, Inc., Linux Symposium, Thin Lines Mountaineering
C. Craig Ross, Linux Symposium

Review Committee

Andrew J. Hutton, Steamballoon, Inc., Linux Symposium, Thin Lines Mountaineering
Dirk Hohndel, Intel
Gerrit Huizenga, IBM
Dave Jones, Red Hat, Inc.
Matthew Wilson, rPath
C. Craig Ross, Linux Symposium

Proceedings Formatting Team

John W. Lockhart, Red Hat, Inc.
Gurhan Ozen, Red Hat, Inc.
Eugene Teo, Red Hat, Inc.
Kyle McMartin, Red Hat, Inc.
Jake Edge, LWN.net
Robyn Bergeron
Dave Boucher, IBM
Mats Wichmann, Intel

Authors retain copyright to all submitted papers, but have granted unlimited redistribution rights to all as a condition of submission.

Abstract

Mobile Linux users demand system suspend-to-RAM (STR) capability due to its combination of low latency and high energy savings. Here we survey the design and operation of STR in Linux, focusing on its implementation on high-volume x86 ACPI-compliant systems. We point out significant weaknesses in the current design, and propose future enhancements. This paper will be of interest primarily to a technical audience of kernel and device driver developers, but others in the community who deploy, support, or use system sleep states may also find it useful.

1 Introduction

When a computer is powered on, it enters the working state and runs applications. When a computer is powered off, it consumes (almost) no energy, but it does not run applications. As illustrated in Figure 1, several power-saving system sleep states are available between the working and power-off states. System sleep states share several properties:

- Energy is saved.
- The CPUs do not execute code.
- The I/O devices are in low-power states.
- Application state is preserved.

However, system sleep states differ from each other in two major ways:

- The size of the energy-saving benefit.
- The wake-up latency cost required to return to the working state.
Linux supports three types of system-sleep states: standby, suspend-to-RAM, and hibernate-to-disk.

Standby is the shallowest system-sleep state, and so it enjoys minimal wake-up latency. However, standby also delivers the least energy savings of the sleep states.

Suspend-to-RAM is a deeper sleep state than standby. STR saves more energy, generally by disabling all of the motherboard components except those necessary to refresh main memory and handle wake-up events. In practice, STR generally enjoys the same latency as standby, yet saves more energy. Thus standby support is typically of little benefit, and computer systems often do not provide it.

Hibernate-to-disk is deeper than suspend-to-RAM. Indeed, it shares the same (almost zero) energy consumption as the power-off state. This makes hibernate valuable when application state needs to be preserved for a long period. Unfortunately hibernate resume latency is quite high, so it generally is not as useful as STR.

Here we focus on suspend-to-RAM, popular due to its combination of relatively low latency and relatively high energy savings. We first introduce the Linux Power Management (PM) Core, which is responsible for controlling suspend and resume operations at the kernel level. Next we describe the role of the platform firmware and ACPI in STR, and we provide an overview of the operations performed by the Linux kernel during suspend and resume. We examine some parts of the kernel's STR infrastructure in more detail, focusing on the known issues with the current implementation and planned improvements. Finally, we consider some problems related to the handling of hardware, especially graphics adapters, their current workarounds, and future solutions.
2 The Linux Power Management Core

STR and standby are available only on systems providing adequate hardware support for them: the system main memory has to be powered and refreshed as appropriate so that its contents are preserved in a sleep state. Moreover, the hardware must provide a mechanism making it possible to wake the system up from that state and pass control back to the operating system (OS) kernel. It is also necessary that power be at least partially removed from devices before putting the system into a sleep state, so that they do not drain energy in vain.

The part of the system providing this functionality is often referred to as the platform. It defines the foundation of the system—the motherboard hardware as well as the firmware that ships with it.\(^1\) To carry out STR and standby power transitions, the Linux kernel has to interact with the platform through a well-defined interface. For this purpose, the kernel includes platform drivers responsible for carrying out low-level suspend and resume operations required by particular platforms. They supply a set of callbacks via struct platform_suspend_ops, shown in Figure 3. The .valid() and .enter() callbacks are mandatory, while the others are optional.

The platform drivers are used by the PM core. The PM core is generic: it runs on a variety of platforms for which appropriate platform drivers are available, including ACPI-compatible personal computers (PCs) and ARM platforms. This paper focuses primarily on ACPI-compatible platforms. The next section describes how ACPI fits into the system architecture and describes some of the specific capabilities that ACPI provides for suspend/resume. Then we combine the PM core and ACPI discussions by stepping through the suspend and resume sequences on an ACPI platform, in which all six of the platform_suspend_ops are invoked.

3 ACPI and Platform System Architecture

Platform hardware defines the programming model seen by software layers above.
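The callback interface just described can be sketched in C. The struct below mirrors the six platform_suspend_ops callbacks named in the text; the dummy platform driver and the miniature "PM core" sequencer are illustrative user-space code, not actual kernel code.

```c
/* Sketch of the platform_suspend_ops interface described above.
 * The callback names follow the paper; the dummy driver and the
 * mini "PM core" below are illustrative, not kernel code. */

typedef int suspend_state_t;
#define PM_SUSPEND_STANDBY 1
#define PM_SUSPEND_MEM     3

struct platform_suspend_ops {
    int  (*valid)(suspend_state_t state);   /* mandatory */
    int  (*begin)(suspend_state_t state);   /* optional  */
    int  (*prepare)(void);                  /* optional  */
    int  (*enter)(suspend_state_t state);   /* mandatory */
    void (*finish)(void);                   /* optional  */
    void (*end)(void);                      /* optional  */
};

/* Dummy platform driver supporting only suspend-to-RAM. */
static int dummy_valid(suspend_state_t state) { return state == PM_SUSPEND_MEM; }
static int dummy_enter(suspend_state_t state) { (void)state; return 0; }

static const struct platform_suspend_ops dummy_ops = {
    .valid = dummy_valid,
    .enter = dummy_enter,
    /* optional callbacks left NULL */
};

/* Mini "PM core": run the callback sequence, skipping optional NULLs. */
int pm_core_suspend(const struct platform_suspend_ops *ops, suspend_state_t s)
{
    int err;
    if (!ops->valid(s))
        return -1;               /* state not supported by this platform */
    if (ops->begin && (err = ops->begin(s)))
        return err;
    if (ops->prepare && (err = ops->prepare()))
        return err;
    err = ops->enter(s);         /* the real system sleeps here */
    if (ops->finish)
        ops->finish();
    if (ops->end)
        ops->end();
    return err;
}
```

The mandatory/optional split matches the text: the sequencer only checks for NULL on the optional callbacks.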
Platforms often augment this programming model by including an embedded controller (EC) running motherboard firmware. The EC off-loads the main processors by monitoring sensors for buttons, batteries, temperature, fans, etc. In addition, personal computer (PC) motherboards include a PC-compatible BIOS (Basic Input Output System) responsible for initializing the system and presenting some standard services, such as booting, to the OS kernel.

The ACPI specification [ACPI-SPEC] defines a layer of abstraction that sits above the platform-defined layer. ACPI specifies that the platform hardware must support certain standard registers, and that the BIOS be extended to export tables in memory to guide the OS kernel. Figure 4 shows how ACPI fits into the system architecture.

Some ACPI tables are simply static data structures that the ACPI BIOS uses to describe the machine to the OS. Other ACPI tables contain functions, known as methods, to be executed in the kernel, encoded in the ACPI Machine Language (AML). AML methods are first expressed in ACPI Source Language (ASL) and then translated into AML byte code with the help of an ASL compiler. The compiled AML tables are then burned into PROM when the motherboard is manufactured.

AML is executed by the ACPI Component Architecture [ACPICA] Core's AML interpreter residing in the next layer up—the Linux kernel. This design allows AML, which is effectively ACPI BIOS code, to run in kernel context. That, in turn, allows platform designers to use many different types of hardware components and to tailor the interfaces between those components and the OS kernel to their needs.

\(^1\) The motherboard hardware includes not only the major processor, memory, and I/O sub-systems, but also interrupt controllers, timers, and other logic that is visible to the software layers above.
However, since the AML code is obtained by compiling source code written in ASL, the preparation of an ACPI platform involves software development that is prone to human error. There are additional problems related to ACPI's abstraction capability, some of which are discussed in Section 11.

4 ACPI and Suspend to RAM

Aside from its configuration and run-time power-management responsibilities, ACPI also standardizes several key hooks for suspend and resume:

1. Device power states.
2. Device wake-up control.
3. Standard mechanism to enter system sleep states.
4. Firmware wake-up vector.

When discussing ACPI and devices, it is important to realize that ACPI firmware is stored in EEPROM when a motherboard is created. Thus ACPI can be aware of all logic and devices that are permanently attached to the motherboard, including on-board legacy and PCI devices, device interrupt routing, and even PCI slot hot-plug control. But ACPI has no knowledge of the devices that may later be plugged into I/O slots or expansion buses.

ACPI defines "Device Power States" (D-states) in terms of power consumption, device context retention, device-driver restore responsibilities, and restore latency. The PCI Power Management specification [PCI-PM] similarly defines the D-states used by PCI devices; ACPI extends the notion of D-states to non-PCI motherboard devices. D-states can be used independently of system sleep states, for run-time device power management. Note, however, that D-states are not performance states; that is, they do not describe reduced levels of performance. A device is non-functional in any D-state other than D0. System sleep states, such as STR, mandate the use of D-states.

\(^2\) Before ACPI, the choices to support run-time BIOS code were to call the BIOS directly in real mode, which required a real-mode OS, or to invisibly invoke the System Management Mode (SMM).
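The D-state properties above can be condensed into a tiny sketch. The invariants (only D0 is functional; deeper states trade restore latency for power savings) come from the text, while the code itself is purely illustrative:

```c
/* Illustrative model of ACPI device power states (D-states).
 * D0 = fully on; D3 = off. Only the ordering matters here. */
enum d_state { D0, D1, D2, D3 };

/* A device is functional only in D0. */
int device_functional(enum d_state s) { return s == D0; }

/* A deeper D-state saves more power but has higher restore latency. */
int deeper(enum d_state a, enum d_state b) { return a > b; }
```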
Before D-states were implemented in Linux, we found several systems that would suspend and resume from RAM successfully, but they had devices which would continue to drain energy while the system was suspended—which severely shortened the system's ability to sleep on battery power.

ACPI also defines a mechanism to enable and disable device wake-up capability. When the system is in the working state, this mechanism can be used to selectively wake up a sleeping device from a D-state. When the whole system is suspended, this capability may be used to enable automatic system resume. Many devices export native wake-up control. In particular, modern Ethernet Network Interface Cards (NICs) support Wake-on-LAN (WOL), and their drivers export that function via ethtool(1). Note that this has to be a native capability, because ACPI firmware cannot provide wake-up support for add-on adapters.

Today the wake-up support in Linux is in flux. There is a legacy hook for ACPI wake-up capability in /proc/acpi/wakeup, but that interface is nearly unusable, as each entry refers to an arbitrary 4-letter ASL name for a device that the user (or an application) cannot reliably associate with a physical device. The device core provides its own wake-up API in /sys/devices/.../power/wakeup; this is not yet fully integrated with the ACPI wake-up mechanism.

The ACPI specification carefully describes the sequence of events that should take place to implement both suspend and resume. Certain platform ACPI hooks must be invoked at various stages in order for the platform firmware to correctly handle suspend and resume from RAM. These hooks are mentioned in later sections that detail the suspend and resume sequences.

Finally, ACPI provides a standard mechanism to tell the platform what address in the kernel to return to upon resume. More information about ACPI in Linux can be found in previous Linux Symposium presentations [ACPI-OLS], as well as on the Linux/ACPI project home page [ACPI-URL].
5 Suspend Overview

There are two ways to invoke the Linux kernel's suspend capability: first, by writing mem into /sys/power/state; second, with the SNAPSHOT_S2RAM ioctl on /dev/snapshot, a device provided by the userspace hibernation driver. This second method is provided only as a means to implement the mixed suspend-hibernation feature and will not be discussed here.

Once mem has been written to /sys/power/state, the PM core utilizes the platform_suspend_ops in the steps shown in Figure 5. First, it invokes the platform driver's .valid() method, in order to check whether the platform supports suspend-to-RAM. The .valid() callback takes one argument representing the intended system-sleep state. Two values may be passed by the PM core, PM_SUSPEND_STANDBY and PM_SUSPEND_MEM, representing the standby and the suspend-to-RAM sleep states, respectively. The .valid() callback returns true if the platform driver can associate the state requested by the PM core with one of the system-sleep states supported by the platform.

Note, however, that on non-ACPI systems the choice of the actual sleep state is up to the platform. The state should reflect the characteristics requested by the core (e.g., the STR state characteristics if PM_SUSPEND_MEM is used), but the platform may support more than two such sleep states. In that case, the platform driver is free to choose whichever sleep state it considers appropriate, but the choice is made later, not during .valid().

If the .valid() platform callback returns true, the PM core attempts to acquire pm_mutex to prevent concurrent system-wide power transitions from being started. If that succeeds, it goes on to execute sys_sync() to help prevent unwritten file-system data from being lost in case the power transition fails in an unrecoverable manner. It then switches the system console to a text terminal in order to prevent the X server from interfering with device power-down.
It invokes suspend notifiers to let registered kernel subsystems know about the impending suspend, as described in Section 7. It then freezes the tasks, as detailed in Section 8. This puts all user processes into a safe, static state: they do not hold any semaphores or mutexes, and they cannot run until the PM core allows them to.

Next the PM core invokes the .begin() platform-suspend callback to notify the platform driver of the desired power transition. The .begin() callback takes one argument representing the system-sleep state requested by the PM core. The interpretation as well as the possible values of its argument are the same as for the .valid() callback. At this point, however, the platform driver chooses the sleep state in which to place the system and stores the information for future reference.

The choice made by the platform driver is not directly conveyed to the PM core, which does not have the means to represent the various sleep states that may be supported by different platforms and does not really need that information. Still, the sleep state chosen by the platform driver, the target state of the transition, may determine the low-power states (D-states on ACPI systems) into which devices should be put. For this reason, the platform driver provides device drivers with information on the low-power states in which they are supposed to place their devices. The .begin() callback returns 0 on success or an error code on failure, in which case the PM core aborts the transition.

Footnotes: The old STR user interface, based on /proc/acpi/sleep, is deprecated. The mixed suspend-hibernation feature first creates a hibernation image and then suspends to RAM. If the battery lasts, the system can resume from RAM, but if the battery fails, the hibernation image is used. An experimental implementation is provided by the s2both utility included in the user-land suspend package at http://suspend.sf.net.
After .begin() succeeds, the PM core blocks system console messages in order to protect the console devices from being accessed while they are suspended, and starts suspending devices (i.e., putting them into low-power states). Devices are suspended in reverse order of registration; in this way the kernel should never find itself stuck in a situation where it needs to access a suspended device or where a suspended parent has an active child.

To suspend a device, the PM core invokes the suspend callbacks provided by the device-class driver, device-type driver, and bus-type driver associated with it, in that order. Device-class, device-type, and bus-type drivers can each implement one device-suspend method, called .suspend(), and one corresponding device-resume method, called .resume(). Bus-type drivers can define extra device-suspend and -resume methods to be executed with interrupts disabled, called .suspend_late() and .resume_early(), respectively. These additional methods are invoked after the non-boot CPUs have been disabled, as described below.

Each of the suspend callbacks takes two arguments: a pointer to the appropriate device structure and a pm_message_t argument representing the transition being carried out. Currently five values of this argument are recognized: PMSG_ON, PMSG_FREEZE, PMSG_SUSPEND, PMSG_HIBERNATE, and PMSG_PRETHAW. The first represents the transition back to the working state, while PMSG_FREEZE, PMSG_HIBERNATE, and PMSG_PRETHAW are specific to hibernation. Thus only PMSG_SUSPEND is used for standby and STR. Since the same callbacks are invoked for both suspend and hibernation, they must determine the proper actions to perform on the basis of the pm_message_t argument.

The device-class, device-type, and bus-type suspend callbacks are responsible for invoking the suspend callbacks implemented by individual device drivers.
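A driver-side suspend callback dispatching on its pm_message_t argument, as described above, might look like the following sketch. The event constants, the device structure, and the driver body are all illustrative; the real PM_EVENT_* values live in the kernel's headers:

```c
/* Sketch: a driver's .suspend() callback branching on the
 * pm_message_t it receives. Event values are illustrative. */
enum pm_event { PM_EVENT_ON, PM_EVENT_FREEZE, PM_EVENT_SUSPEND,
                PM_EVENT_HIBERNATE, PM_EVENT_PRETHAW };

typedef struct pm_message { int event; } pm_message_t;

struct device { int saved_state; int powered; };   /* toy device */

int mydrv_suspend(struct device *dev, pm_message_t msg)
{
    switch (msg.event) {
    case PM_EVENT_SUSPEND:        /* standby / STR: save state, power down */
        dev->saved_state = 1;
        dev->powered = 0;
        return 0;
    case PM_EVENT_FREEZE:         /* hibernation paths: quiesce, keep power */
    case PM_EVENT_PRETHAW:
        dev->saved_state = 1;
        return 0;
    default:
        return -1;                /* unexpected for a suspend callback */
    }
}
```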
In principle the names of those suspend callbacks may depend on the device class, device type, or bus type the device belongs to, but traditionally drivers' suspend callbacks are called .suspend() (or .suspend_late() if they are to be executed with interrupts disabled). Also, all of them take two arguments, the first of which is a pointer to the device structure and the second of which is as described above.

The framework for suspending and resuming devices is going to be changed. The current framework has some deficiencies: it is inflexible and quite inconvenient to use from a device-driver author's point of view, and it is not adequate for suspending and resuming devices without the freezing of tasks. As stated in Section 8, the freezing of tasks is planned to be phased out in the future, so a new framework for suspending and resuming devices (described in Section 9) is being introduced.

After executing all of the device-suspend callbacks, the PM core invokes the .prepare() platform-suspend method to prepare the platform for the upcoming transition to a sleep state. For the ACPI platform, the _PTS global-control method is executed at this point.

Next the PM core disables non-boot CPUs, with the help of the CPU hot-plug infrastructure. We will not discuss this infrastructure in detail, but it seems important to point out that the CPU hot-plug notifiers are called with special values of their second argument while non-boot CPUs are being disabled during suspend and enabled during resume. Specifically, the second argument is bitwise OR'ed with CPU_TASKS_FROZEN, so that the notifier code can avoid doing things that might lead to a deadlock or cause other problems at these times.
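The CPU_TASKS_FROZEN convention can be sketched as follows. The notifier signature is simplified and the constant values are illustrative, not authoritative; the point is that one handler serves both runtime hot-plug and suspend/resume by masking the flag off:

```c
/* Sketch: a CPU hot-plug notifier masking off CPU_TASKS_FROZEN so the
 * same handler serves both runtime hot-plug and suspend/resume, while
 * skipping work that is unnecessary during a power transition.
 * Constant values are illustrative. */
#define CPU_ONLINE        0x2UL
#define CPU_TASKS_FROZEN  0x10UL

int handled_runtime, handled_pm;   /* counters for illustration */

void cpu_notifier(unsigned long action)
{
    int frozen = (action & CPU_TASKS_FROZEN) != 0;
    switch (action & ~CPU_TASKS_FROZEN) {
    case CPU_ONLINE:
        if (frozen)
            handled_pm++;       /* resume path: skip re-registration */
        else
            handled_runtime++;  /* normal hot-plug: full setup */
        break;
    }
}
```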
The notifier routines should also avoid doing things that are not necessary during suspend or resume, such as un-registering device objects associated with a CPU being disabled—these objects would just have to be re-registered during the subsequent resume, an overall waste of time.\footnote{It also would mess up the PM core's internal lists, since the objects would be re-registered while they were still suspended.}

After disabling the non-boot CPUs, the PM core disables hardware interrupts on the only remaining functional CPU and invokes the .suspend_late() methods implemented by bus-type drivers, which, in turn, invoke the corresponding callbacks provided by device drivers. Then the PM core suspends the so-called system devices (also known as sysdevs) by executing their drivers' .suspend() methods. These take two arguments just like the regular .suspend() methods implemented by normal (i.e., not sysdev) device drivers, and the meaning of the arguments is the same.

To complete the suspend, the PM core invokes the .enter() platform-suspend method, which puts the system into the requested sleep state. If the .begin() method is implemented for the given platform, the state chosen while it was executed is used and the argument passed to .enter() is ignored. Otherwise, the platform driver uses the argument passed to .enter() to determine the state in which to place the system.

6 Resume Overview

The resume sequence is the reverse of the suspend sequence, but some details are noteworthy. Resume is initiated by a wake-up event—a hardware event handled by the platform firmware. This event may be opening a laptop lid, pressing the power button or a special keyboard key, or receiving a magic WOL network packet.

Devices must be enabled for wake-up before the suspend occurs. On ACPI platforms, the power button is always enabled as a wake-up device.
The sleep button and lid switches are optional, but if present they too are enabled as wake-up devices. Any platform device may be configured as a wake-up device, but the power, sleep, and lid buttons are standard.

When a wake-up event occurs, the platform firmware initializes as much of the system as necessary and passes control to the Linux kernel by performing a jump to a memory location provided during the suspend. The kernel code executed at this point is responsible for switching the CPU to the appropriate mode of operation. This sequence is similar to early boot, so it is generally possible to reuse some pieces of the early initialization code for performing the resume CPU-initialization operations.

Once the CPU has been successfully reinitialized, control is passed to the point it would have reached if the system had not been put into the sleep state during the suspend. Consequently the PM core sees the platform-suspend callback .enter() return zero. When that happens, the PM core assumes the system has just woken up from a sleep state and starts to reverse the actions of the suspend operations described in Section 5.

It resumes sysdevs by executing their .resume() callbacks, and then it invokes the device-resume callbacks to be executed with interrupts disabled. That is, it executes the .resume_early() callbacks provided by bus-type drivers; they are responsible for invoking the corresponding callbacks implemented by device drivers. All of these callbacks take a pointer to the device object as their only argument.

Subsequently the non-boot CPUs are enabled with the help of the CPU hot-plug code. As mentioned above, all of the CPU hot-plug notifiers executed at this time are called with their second argument OR-ed with CPU_TASKS_FROZEN, so that they will avoid registering new device objects or doing things that might result in a deadlock with a frozen task.
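The ordering rule running through both sequences (devices are suspended in reverse order of registration and resumed in registration order, so a suspended parent never has an active child) can be sketched with a toy device list; all names here are illustrative:

```c
/* Toy model of the PM core's device ordering: suspend walks the
 * registration list backwards, resume walks it forwards. */
#include <string.h>

#define MAX_DEV 8

static const char *reg[MAX_DEV];  /* devices in registration order */
static int ndev;

static char trace[64];            /* records the callback order */

void register_dev(const char *name) { reg[ndev++] = name; }

void suspend_all(void)            /* last registered, first suspended */
{
    for (int i = ndev - 1; i >= 0; i--) {
        strcat(trace, "S:"); strcat(trace, reg[i]); strcat(trace, " ");
    }
}

void resume_all(void)             /* first registered, first resumed */
{
    for (int i = 0; i < ndev; i++) {
        strcat(trace, "R:"); strcat(trace, reg[i]); strcat(trace, " ");
    }
}
```

Registering a parent before its child therefore guarantees the child is suspended first and resumed last.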
After enabling the non-boot CPUs, the PM core calls the .finish() platform-suspend method to prepare the platform for resuming devices. In the case of an ACPI platform, the _WAK global-control method is executed at this point.

Next the PM core resumes devices by executing the device-resume methods provided by bus-type, device-type, and device-class drivers, in that order. All of these methods are called .resume() and take a device pointer as their only argument. They are responsible for invoking the corresponding methods provided by device drivers. Although these methods may return error codes, the PM core cannot really do anything about resume errors; the codes are used for debugging purposes only. Devices are resumed in order of registration, the reverse of the order in which they were suspended.

Once devices have been resumed, the PM core unblocks the system console so that diagnostic messages can be printed, and calls the .end() platform method. This method is responsible for doing the final platform-resume cleanup. In particular, it ensures that the information about the target sleep state of the system stored by .begin() has been discarded by the platform driver.

The last three steps of resume are the thawing of tasks, invoking the suspend notifiers with the appropriate argument (PM_POST_SUSPEND), and switching the system console back to whatever terminal it had been set to before the suspend started. Finally, the PM core releases pm_mutex.

7 Suspend and Hibernation Notifiers

Suspend and hibernation notifiers are available for subsystems that need to perform some preparations before tasks are frozen (see Section 8). For example, if a device driver needs to call request_firmware() before a suspend, that should be done from within a suspend notifier.

\(^7\) For example, protected mode on an i386 PC or 64-bit mode on an x86-64 system.
The notifiers are registered and un-registered using the register_pm_notifier() and unregister_pm_notifier() functions, respectively. Both functions take one argument, a pointer to an appropriately populated struct notifier_block. If there is no need to un-register a suspend notifier, it can be registered with the help of the simplifying macro pm_notifier(), which takes only a function name and a priority as arguments.

The notifiers are called just prior to freezing tasks during both suspend and hibernation, with their second argument set to PM_SUSPEND_PREPARE or PM_HIBERNATION_PREPARE, respectively. They are also called if suspend or hibernation fails. The PM core does not distinguish these invocations from the calls made during a successful resume; for this reason, the notifier code should be prepared to detect and handle any potential errors resulting from a suspend failure. Regardless, the rule is that if the notifiers were called with PM_SUSPEND_PREPARE during suspend, then they are called with PM_POST_SUSPEND to undo the changes introduced by the previous invocation, either during resume or in a suspend error path.

Notifiers return zero on success; otherwise they return appropriate error codes. However, while an error code returned by a notifier called during suspend causes the entire suspend to fail, error codes returned by notifiers called during resume are ignored by the PM core, since it is not able to act on them in any significant way.

8 Freezing Tasks

In both suspend and hibernation, tasks are frozen before devices are suspended. This ensures that all user-space processes are in a stable state in which they do not hold any semaphores or mutexes, and they will not continue running until the PM core allows them. This mechanism was introduced with hibernation in mind, to prevent data from being written to disks after the hibernation image was created.
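A simplified model of the registration and invocation pattern just described follows. The real kernel notifier_call takes additional arguments and notifiers carry priorities; both are omitted here, and the constant values and example subsystem are illustrative:

```c
/* Sketch of the suspend-notifier pattern: subsystems register a
 * callback that the PM core invokes with PM_SUSPEND_PREPARE before
 * freezing tasks and PM_POST_SUSPEND after resume (or on failure).
 * Simplified from the kernel's notifier_block API. */
#define PM_SUSPEND_PREPARE 3
#define PM_POST_SUSPEND    4

struct notifier_block {
    int (*notifier_call)(struct notifier_block *nb, unsigned long action);
    struct notifier_block *next;
};

static struct notifier_block *pm_chain;

void register_pm_notifier(struct notifier_block *nb)
{
    nb->next = pm_chain;
    pm_chain = nb;
}

/* Returns the first error, aborting the chain (as on suspend). */
int call_pm_notifiers(unsigned long action)
{
    for (struct notifier_block *nb = pm_chain; nb; nb = nb->next) {
        int err = nb->notifier_call(nb, action);
        if (err)
            return err;
    }
    return 0;
}

/* Example subsystem: counts prepare/post events. */
static int prepared, completed;
static int counting_cb(struct notifier_block *nb, unsigned long action)
{
    (void)nb;
    if (action == PM_SUSPEND_PREPARE) prepared++;
    if (action == PM_POST_SUSPEND)    completed++;
    return 0;
}
static struct notifier_block counting_nb = { counting_cb, 0 };
```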
Otherwise the on-disk data would not reflect the information preserved within the hibernation image, leading to corruption when the system resumed.

Historically, Linux's support for hibernation has been much more robust than its support for STR, and drivers' suspend and resume callbacks were designed and tested with hibernation in mind. They generally expected tasks to be frozen before they were executed. Since the same callbacks were (and still are) used for both STR and hibernation, it became necessary to freeze tasks before STR as well as before hibernation.

The piece of code that freezes tasks is called the freezer. It is invoked by the PM core after the suspend notifiers are called (see Section 7) and just before the .begin() platform-suspend method is executed. It works by traversing the list of all tasks in the system and setting the TIF_FREEZE flag for the ones marked as freezable (i.e., those without the PF_NOFREEZE flag set). It does this first for user-space tasks, calling signal_wake_up() on each of them. The code then enters a busy loop in which the freezer checks whether there still are any tasks with TIF_FREEZE set. The loop finishes when there are none left, or when the only remaining ones also have the PF_FREEZER_SKIP flag set.\footnote{The freezer distinguishes user-space tasks from kernel threads on the basis of the task's mm pointer. If this pointer is NULL or has only been set temporarily, the task is regarded as a kernel thread; otherwise it is assumed to belong to user-space.}

The tasks for which TIF_FREEZE has been set are forced by the signal handler to call refrigerator(). This function unsets TIF_FREEZE, sets the PF_FROZEN flag for the current task, and puts it into the TASK_UNINTERRUPTIBLE state. The function will keep the task in this state as long as the PF_FROZEN flag is set; the PM core has to reset that flag before the task can do any more useful work.
Thus, the tasks that have PF_FROZEN set and are inside the refrigerator() function are regarded as "frozen." As a result of the way in which refrigerator() is entered, the frozen tasks cannot hold any semaphores or mutexes, so it is generally safe to leave them in this state before suspending devices.

When all of the user-space tasks have been frozen, the freezer sets TIF_FREEZE for the remaining freezable tasks (i.e., freezable kernel threads). They also are supposed to enter refrigerator(). But while user-space tasks are made to call refrigerator() by the generic signal-handling code, kernel threads have to call it explicitly in order to be frozen. Specifically, they must call the try_to_freeze() function in suitable places. Moreover, the freezer does not call fake_signal_wake_up() on them, since we do not want to send a fake signal to a kernel thread. Instead the freezer calls wake_up_state(p, TASK_INTERRUPTIBLE) on those tasks (where p is a pointer to the task's struct task_struct object). This causes the tasks to be woken up in case they are sleeping—but it also means that kernel threads in the TASK_UNINTERRUPTIBLE state cannot be frozen.\footnote{This also applies to user-space processes in that state.}

Although freezing tasks may seem to be a simple mechanism, it has several problems. First of all, the main limitation of the freezer (its inability to handle uninterruptible tasks) causes it to fail in many cases where we would like it to succeed. For example, if there is a task waiting on a filesystem lock in the TASK_UNINTERRUPTIBLE state and the lock cannot be released for a relatively long time due to a network error, the freezer will fail and the entire suspend will fail as a result. Second, the freezer does not work well with device drivers having a user-space component, because they may not be able to suspend devices after their user-space parts have been frozen. Third, freezing tasks occasionally takes too much time.
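The freezer's flag handshake can be modeled in a few lines. The flag values are illustrative and the busy-wait and signal delivery are elided, so this is a toy illustration of the TIF_FREEZE/PF_FROZEN protocol described above, not kernel code:

```c
/* Toy model of the freezer handshake: the freezer sets TIF_FREEZE;
 * the task, inside refrigerator(), clears it, sets PF_FROZEN, and
 * would then wait until the PM core clears PF_FROZEN again. */
#define TIF_FREEZE   0x1u
#define PF_FROZEN    0x2u
#define PF_NOFREEZE  0x4u

struct task { unsigned flags; };

void freeze_request(struct task *t)
{
    if (!(t->flags & PF_NOFREEZE))
        t->flags |= TIF_FREEZE;     /* ask the task to freeze itself */
}

/* What the task does when the fake signal forces it into the fridge. */
void refrigerator(struct task *t)
{
    t->flags &= ~TIF_FREEZE;
    t->flags |= PF_FROZEN;
    /* the real code loops in TASK_UNINTERRUPTIBLE while PF_FROZEN is set */
}

int is_frozen(const struct task *t) { return (t->flags & PF_FROZEN) != 0; }

void thaw(struct task *t) { t->flags &= ~PF_FROZEN; }
```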
It usually does not take more than several milliseconds, but in extreme cases (i.e., under a heavy load) it may take up to several seconds, which is way too much for various important usage scenarios.

Finally, the approach used by the freezer to distinguish user-space processes from kernel threads is not optimal. It turns out that there are kernel threads which in fact behave like user-space processes and therefore should be frozen in the same way, by sending fake signals to them with signal_wake_up(). These threads often fail to call refrigerator() in a timely manner, causing the freezer to fail.

For these reasons, the kernel developers generally agree that the freezing of tasks should not be used during suspend. Whether it should be used during hibernation is not entirely clear, but some implementations of hibernation without freezing tasks are being actively discussed. In any case, there ought to be an alternative mechanism preventing user-space processes and kernel threads from accessing devices that are in a low-power state (i.e., after they have been suspended and before they are resumed). It is generally believed that device drivers should handle this, and for this purpose it will be necessary to rework the suspend and resume framework.

9 Proposed Framework for Suspending and Resuming Devices

As stated in Section 5, the current framework for suspending and resuming devices does not seem to be adequate. It is considered inflexible and generally difficult to use in some situations. It does not include any mechanisms allowing the PM core to protect its internal data structures from damage caused by inappropriate driver implementations.\footnote{For example, if a callback or notifier routine registers a new device object below a suspended parent, the ordering of the device list used by the PM core will be incorrect and the next suspend may fail as a result.} It does not provide enough context information to resume callbacks.
Finally, it may not be suitable when the freezing of tasks is removed and device drivers are made responsible for preventing access to suspended devices. Consequently a new framework for suspending and resuming devices is now being introduced [PATCHES]. The first problem solved by the new framework is the lack of separation between the suspend and hibernation callbacks, especially where the resume part is concerned. Within the current framework the same device-resume callbacks are used for both hibernation and suspend, and since they take only one argument (a pointer to the device structure), it is nearly impossible for them to determine the context in which they are invoked. This is a serious limitation leading to unnecessary complications in some cases, and it is going to be fixed by introducing separate device-resume callbacks for suspend and hibernation.\footnote{In fact, two separate device-resume callbacks are necessary for hibernation: one to be called after creating an image and one to be called during the actual resume.} Likewise, separate device-suspend callbacks for suspend and hibernation will be introduced, so that the pm_message_t argument (used for determining the type of transition being carried out, see Section 5) will not be necessary any more. 
\begin{verbatim}
struct pm_ops {
    int (*prepare)(struct device *dev);
    void (*complete)(struct device *dev);
    int (*suspend)(struct device *dev);
    int (*resume)(struct device *dev);
    int (*freeze)(struct device *dev);
    int (*thaw)(struct device *dev);
    int (*poweroff)(struct device *dev);
    int (*restore)(struct device *dev);
};
\end{verbatim}

Figure 7: Proposed \texttt{struct pm_ops}

\begin{verbatim}
struct pm_ext_ops {
    struct pm_ops base;
    int (*suspend_noirq)(struct device *dev);
    int (*resume_noirq)(struct device *dev);
    int (*freeze_noirq)(struct device *dev);
    int (*thaw_noirq)(struct device *dev);
    int (*poweroff_noirq)(struct device *dev);
    int (*restore_noirq)(struct device *dev);
};
\end{verbatim}

Figure 8: Proposed \texttt{struct pm_ext_ops}

For device drivers, as well as for the bus type, device type, and device class their devices belong to, it is strongly recommended to use \texttt{struct pm_ops} or \texttt{struct pm_ext_ops} objects to represent the new callbacks. However, the legacy callback method pointers will remain available for the time being. The majority of callbacks provided by \texttt{struct pm_ops} and \texttt{struct pm_ext_ops} objects are hibernation-specific and we will not discuss them. We will focus on the callbacks that are STR-specific or common to both suspend and hibernation. The \texttt{.prepare()} callback is intended for initial preparation of the driver for a power transition, without changing the hardware state of the device. Among other things, \texttt{.prepare()} should ensure that after it returns, no new children will be registered below the device. (Un-registering children is allowed at any time.) It is also recommended that \texttt{.prepare()} take steps to prevent potential race conditions between the suspend thread and any other threads.
The \texttt{.prepare()} callbacks will be executed by the PM core for all devices before the \texttt{.suspend()} callback is invoked for any of them, so device drivers may generally assume that the other devices are functional while \texttt{.prepare()} is being run.\footnote{However, user-space tasks will already be frozen, meaning that things like \texttt{request_firmware()} cannot be used. This limitation may be lifted in the future.} In particular, GFP_KERNEL memory allocations can safely be made. The \texttt{.prepare()} callbacks will be executed during suspend as well as during hibernation.\footnote{During hibernation they will be executed before the image is created, and during resume from hibernation they will be executed before the contents of system memory are restored from the image.} The \texttt{.suspend()} callback is suspend-specific. It will be executed before the platform \texttt{.prepare()} method is called (see Section 5) and the non-boot CPUs are disabled. In this callback the device should be put into the appropriate low-power state and the device’s wake-up mechanism should be enabled if necessary. Tasks must be prevented from accessing the device after \texttt{suspend()} has run; attempts to do so must block until \texttt{resume()} is called. Some drivers will need to implement the \texttt{suspend_noirq()} callback and its resume counterpart, \texttt{resume_noirq()}. The role of these callbacks is to switch off and on, respectively, devices that are necessary for executing the platform methods \texttt{prepare()} and \texttt{finish()} or for disabling and re-enabling the non-boot CPUs. They should also be used for devices that cannot be suspended with interrupts enabled, such as APICs.\footnote{At present, APICs are represented by \textit{sysdev} objects and are suspended after the regular devices. 
It is possible, however, that they will be represented by platform device objects in the future.} The \texttt{resume()} callback is the counterpart of \texttt{suspend()}. It should put the device back into an operational state, according to the information saved in memory by the preceding \texttt{suspend()}. After \texttt{resume()} has run, the device driver starts working again, responding to hardware events and software requests. The role of \texttt{complete()} is to undo the changes made by the preceding \texttt{prepare()}. In particular, new child devices that were plugged in while the system was suspended and detected during \texttt{resume()} should not be registered until \texttt{complete()} is called. It will be executed for all kinds of resume transitions, including resume-from-hibernation, as well as in cases when a suspend or hibernation transition fails. After \texttt{complete()} has run, the device is regarded as fully functional by the PM core and its driver should handle all requests as appropriate. The \texttt{complete()} callbacks for all devices will be executed after the last \texttt{resume()} callback has returned, so drivers may generally assume the other devices to be functional while \texttt{complete()} is being executed. All of the callbacks described above except for \texttt{complete()} return zero on success or a nonzero error code on failure. If \texttt{prepare()}, \texttt{suspend()}, or \texttt{suspend_noirq()} returns an error code, the entire transition will be aborted. However the PM core is not able to handle errors returned by \texttt{resume()} or \texttt{resume_noirq()} in any meaningful manner, so they will only be printed to the system logs.\footnote{To change this, the resume callbacks would have to be required to return error codes only in case of a critical failure. This currently is not possible, since some drivers return noncritical errors from their legacy resume callbacks. 
In any event, drivers have a better idea of what recovery options are feasible than the PM core does.} It is expected that the addition of the \texttt{prepare()} and \texttt{complete()} callbacks will improve the flexibility of the suspend and resume framework. Most importantly, these callbacks will make it possible to separate preliminary actions that may depend on the other devices being accessible from the actions needed to stop the device and put it into a low-power state. They will also help to avoid some synchronization-related problems that can arise when the freezing of tasks is removed from the suspend code path. For example, drivers may use \texttt{prepare()} to disable their user-space interfaces, such as \texttt{ioctls} and \texttt{sysfs} attributes, or put them into a degraded mode of operation, so that processes accessing the device cannot disturb the suspend thread. Moreover, we expect that the introduction of the hibernation-specific callbacks and the elimination of the \texttt{pm_message_t} parameter will help driver authors to write more efficient power-management code. Since all of the callbacks related to suspend and hibernation are now going to be more specialized and the context in which they are invoked is going to be clearly defined, it should be easier to decide what operations are to be performed by a given callback and to avoid doing unnecessary things (such as putting a device into a low-power state before the hibernation image is created).

## 10 Suspend to RAM and Graphics Adapters

One of the most visible weaknesses of Linux’s current implementation of suspend-to-RAM is the handling of graphics adapters. On many systems, after resume-from-RAM, the computer’s graphics adapter is not functional or does not behave correctly. In the most extreme cases this may lead to system hangs during resume and to the appearance of many unusual failure modes.
It is related to the fact that the way Linux handles graphics does not meet the expectations of hardware manufacturers.\footnote{This mostly applies to the vendors of notebooks.} For a long time graphics has been handled entirely by the X server, which from the kernel’s point of view is... simply a user-space process. Usually the X server uses its own graphics driver and accesses the registers of the graphics adapter directly. In such cases the kernel does not need to provide its own driver as well, and the X server is left in control. Normally this does not lead to any problems, but unfortunately with suspend-to-RAM it does. When the system is put into STR, power is usually removed from the graphics adapter, causing it to forget its pre-suspend settings. Hence during the subsequent resume, it is necessary to reinitialize the graphics adapter so that it can operate normally. This may be done by the computer’s BIOS (which gets control over the system when a wake-up event occurs) and it often is done that way on desktop systems. However, many laptop vendors tend to simplify their BIOSes by not implementing this reinitialization, because they expect the graphics driver provided by the operating system to take care of it. Of course this is not going to work on Linux systems where the kernel does not provide a graphics driver, because the X server is activated after devices have been resumed and that may be too late for it to reinitialize the graphics adapter, let alone restore its pre-suspend state. Furthermore X may not even have been running when the system was suspended. The ultimate solution to this problem is to implement graphics drivers in the Linux kernel. At a minimum, graphics drivers should be split into two parts, one of which will reside in the kernel and will be responsible for interacting with the other kernel subsystems and for handling events such as a system-wide power transition. 
The other part of the driver may still live in the X server and may communicate with the first part via a well-defined interface. Although this idea is not new, it was difficult to realize owing to the lack of documentation for the most popular graphics adapters. Recently this situation has started to change, with first Intel and then AMD making their adapters’ technical documentation available to kernel and X developers. As a result, .suspend() and .resume() callbacks have been implemented in the i915 driver for Intel adapters, and it is now supposed to correctly reinitialize the adapter during resume-from-RAM. It is expected that this ability will also be added to the graphics drivers for AMD/ATI adapters in the near future. Still, there are many systems for which the graphics adapters are not reinitialized correctly during resume-from-RAM. Fortunately it was observed that the reinitialization could often be handled by a user-space wrapper executing some special BIOS code in 16-bit emulation mode. It turned out that even more things could be done in user-space to bring graphics adapters back to life during resume, and a utility called vbetool was created for this purpose.\footnote{vbetool was written by Matthew Garrett.} At the same time, a utility for manipulating backlight on systems using ATI graphics adapters, called radeon-tool, was created.\footnote{ATI was not a part of AMD at that time.} These two programs were merged into a single utility called s2ram,\footnote{The creator of s2ram is Pavel Machek, but many people have contributed to it since the first version was put together. s2ram is available from http://suspend.sf.net.} which is a wrapper around the Linux kernel’s /sys/power/state interface incorporating the graphics reinitialization schemes. Although s2ram is a very useful tool, it has one drawback: Different systems usually require different operations to be carried out in order to reinitialize their graphics adapters, and it is necessary to instruct s2ram what to do by means of command-line options. Moreover, every user has to figure out which options will work on her or his system, which often is tedious and can involve several failed suspend/resume cycles. For this reason s2ram contains a list of systems for which a working set of options is known,\footnote{s2ram matches computers against its list of known working systems based on the DMI information in the BIOS. The list is built from feedback provided by the s2ram users and is maintained by Stefan Seyfried.} and the users of these systems should be able to suspend and resume their computers successfully using s2ram without any additional effort.

## 11 Problems with Platforms

The flexibility given to platform designers by ACPI can be a source of serious problems. For example, if the ACPI Machine Language (AML) routines invoked before suspend are not implemented correctly or make unreasonable assumptions, their execution may fail and leave the system in an inconsistent state. Unfortunately the kernel has no choice but to execute the AML code, trusting that it will not do any harm. Of course if a given platform is known to have problems, it can be blacklisted on the basis of its DMI\footnote{Desktop Management Interface.} identification. Still, before blacklisting a system, the kernel developers have to know what kind of problems it experiences and what exactly to do to prevent them from happening. That, in turn, requires the users of those systems to report the problems and to take part in finding appropriate workarounds. Moreover, problems may arise even if there is nothing wrong with the AML code. This happens, for example, if the kernel provides native drivers for devices that are also accessed from the AML routines, because the kernel has no means to determine which registers of a given device will be accessed by the AML code before actually executing that code.
Thus, if the native driver accesses the device concurrently with an AML routine, some synchronization issues are likely to appear. It is difficult to solve those issues in a way general enough to be applicable to all systems and, again, blacklisting is necessary to make things work on the affected platforms. Another major inconvenience related to ACPI platforms is that the requirements regarding STR changed between revisions of the ACPI specification. The suggested code ordering for suspend changed between ACPI 1.0 and ACPI 2.0, and there are systems for which only one of them is appropriate. Some of these systems fail to suspend or even crash if the code ordering implemented by the kernel is not the one they expect. Again, the kernel has no choice but to use one suspend code ordering by default and blacklist the systems that require the other one.\footnote{At present, the ACPI 2.0 ordering is used by default.} Last but not least, testing the interactions between the kernel and a particular platform is problematic because it can be carried out on only a limited number of systems. Even if the kernel follows the ACPI specification and works on the systems available to its developers, it may very well fail to work on other systems having different implementations of the AML code in question. For this reason, it is very important that the users of STR test development kernels and immediately report any new STR-related problems, so that the developers can investigate and fix them before the new code is officially released. ## 12 Future Work We’ve reached a level of stability where STR is useful on a large number of systems. We need to continue these stability efforts with the goal of broad, and ultimately universal, deployment. But to make STR even more useful, we need to increase our focus on performance. 
We need tools to track STR performance such that performance is easily and widely measured, issues are easily identified, regressions are prevented, and benefits of tuning are permanent. Linux needs a stable user/system API for device D-state control. D-states should be widely available to run-time device power management, which must inter-operate well with system sleep states. (Some progress in this area has already been made; a few subsystems, such as USB, can power down devices when they are not in active use.) The wake-up API available in \texttt{sysfs} needs to be more widely used and better integrated with the platform wake-up mechanisms. It is unclear that the existing CPU hot-plug infrastructure is ideally suited to system suspend, and alternatives should be sought. We need to think about the work required for device drivers to properly implement suspend and make sure that the burden on driver authors is minimized. This applies to the API seen by individual device drivers as well as to the infrastructure provided by the upper-level device-class drivers. ## 13 How to Participate STR in Linux is approaching a point where a critical mass of developers routinely use it, and thus test it. These developers often run the development and release-candidate kernels and thus immediately notice and report regressions. With tools such as \texttt{git-bisect} [\texttt{GIT-OLS}, \texttt{GIT-URL}], these developers are empowered to do an excellent job isolating issues, even if they never read or modify a single line of suspend-related code. Please join them! For STR on Linux to reach the next level of success, it is critical that the Linux community assert that STR work properly on the systems that they have, and actively file bugs and help isolate regressions when STR does not meet expectations. The more active testers we have, the easier it will be for the community to successfully deploy STR on a broad range of systems. 
Also, note that there is now a dedicated kernel Bugzilla category for STR-related issues: http://bugzilla.kernel.org, product Power Management, component Hibernation/Suspend. Still another group in the community must be mobilized—driver authors. At this point, drivers are expected to include full suspend/resume support if they are used on systems that support suspend. A new driver should not be considered complete enough for inclusion in the kernel if it does not include suspend support.

## 14 Acknowledgments

Linux’s suspend-to-RAM support is not a new idea; it has been evolving for years. We must all acknowledge that we are standing on the shoulders of the giants who came before us, and thank them for all they have done. In particular, the authors would like to single out Pavel Machek of Novell/SuSE, co-maintainer of suspend and hibernation, who has been key to development and adoption. We also acknowledge Andy Grover and Patrick Mochel, who laid much of the foundation starting back in Linux 2.4. We thank Alan Stern for many valuable suggestions and for helping us to prepare the manuscript. We also thank Pavel Machek for valuable comments. Finally, we thank the communities on the mailing lists linux-pm@lists.linux-foundation.org and linux-acpi@vger.kernel.org, where the actual work gets done.

References

[GIT-URL] GIT project home page (http://git.or.cz).
A T2 Graph-Reduction Approach To Fusion

Troels Henriksen, Cosmin E. Oancea
HIPERFIT, Department of Computer Science, University of Copenhagen (DIKU)
athas@sigkill.dk, cosmin.oancea@diku.dk

Abstract

Fusion is one of the most important code transformations as it has the potential to substantially optimize both the memory hierarchy time overhead and, sometimes asymptotically, the space requirement. In functional languages, fusion is naturally and relatively easily derived as a producer-consumer relation between program constructs that expose a richer, higher-order algebra of program invariants, such as the map-reduce list homomorphisms. In imperative languages, fusing producer-consumer loops requires dependency analysis on arrays applied at loop-nest level. Such analysis, however, has often been labeled as “heroic effort” and, if at all, is supported only in its simplest and most conservative form in industrial compilers. Related implementations in the functional context typically apply fusion only when the to-be-fused producer is used exactly once, i.e., in the consumer. This guarantees that the transformation is conservative: the resulting program does not duplicate computation. We show that the above restriction is more conservative than needed, and present a structural-analysis technique, inspired by T2 graph reduction, that relaxes it. We describe L\textsubscript{0}'s semantics and the compiler infrastructure on which the fusion transformation relies, and present compiler-generated statistics related to fusion on a set of six benchmarks.

Categories and Subject Descriptors D.1.3 [Concurrent Programming]: Parallel Programming; D.3.4 [Programming Languages]: Processors; Compilers

General Terms Performance, Design, Algorithms

Keywords fusion, autoparallelization, functional language

1.
Introduction One of the main goals of the HIPERFIT project has been to develop the infrastructure necessary to write real-world, big-data financial applications in a hardware-independent language that can be efficiently executed on massively parallel hardware, e.g., GPGPU. In this sense we have examined several such computational kernels [26], originally implemented in languages such as OCaml, Python, C++, C and measuring in the range of hundreds of lines of compact code, with two main objectives in mind: 1. What should be a suitable core language that, on the one hand, would allow a relatively straight-forward code translation, and, on the other hand, would preserve the algorithmic invariants that are needed to optimize the application globally? 2. What compiler optimizations would result in efficiency comparable to code hand-tuned for the specific hardware? The answer to the first question has been a functional language, dubbed L\textsubscript{0}, supporting map-reduce nested parallelism on regular arrays, i.e., each row of the array has the same size: It is regular because our suite does not require irregular arrays in the sense of NESL or DPH [9, 13], and regular arrays are more amenable to compiler optimizations, e.g., they allow transposition and simplified size analysis. It is nested because our suite exhibits several layers of parallelism that cannot be exploited by flat parallelism in the style of REPA [22], e.g., several innermost scan or reduce operations and at least one (semantically) sequential loop per benchmark. Finally, it is functional because we would rather invest compiler effort in exploiting high-level program invariants rather than in proving them. The common example here is parallelism: map-reduce constructs are inherently parallel, while Fortran-style do loops require sophisticated analysis to decide parallelism. 
Furthermore, such analyses [12, 19, 27, 28] have not yet been integrated in the repertoire of commercial compilers, likely due to “heroic effort” concerns, even though (i) their effectiveness was demonstrated on comprehensive suites, and (ii) some of them were developed more than a decade ago. Perhaps less expectedly, the answer to the second question seems to be that a common ground needs to be found between functional and imperative optimizations and, to a lesser extent, between language constructs. Much in the same way in which (data) parallelism seems to be generated by a combination of map, reduce, and scan operations, the optimization opportunities, e.g., enhancing the degree of parallelism and reducing the memory time and space overheads, seem solvable via a combination of fusion, transposition, loop interchange and distribution [2]. It follows that loops are necessary in the intermediate representation, regardless of whether they are provided as a language construct or are derived from tail-recursive functions via a code transformation. Finally, an indirect consequence of having to deal with sequential (dependent) loops is that L\textsubscript{0} provides support for “in-place updates” of array elements. The semantics is the functional one, i.e., a deep copy of the original array but with the corresponding element replaced, intersected with the imperative one, i.e., if aliasing may prevent an in-place implementation, a compile-time error is signaled. This approach enables the intuitive (optimized) cost model that the user likely assumes, while preserving the functional semantics. Section 2 provides an overview of the L\textsubscript{0} language and of the enabling optimizations that set the stage for fusion.
1.1 Fusion and Principal Contributions

The simplest form of fusion is the one that relies on the invariant that mapping each element of a list by a function \( f \), and then mapping each element of the result with a function \( g \), is the same as mapping each element of the original list with the composition of \( g \) with \( f \). In Haskell-like notation this can be written as: \[ \text{map } g \,.\, \text{map } f \equiv \text{map } (g \circ f) \] where the semantics of \( \text{map} \) is \( \text{map } f\ [a_1, \ldots, a_n] = [f\ a_1, \ldots, f\ a_n] \). One can already observe that fusion may significantly optimise computation since it may replace some of the array accesses with scalar accesses, e.g., if the element type is a basic type. Furthermore, defining \( \text{reduce} \) as \( \text{reduce} \oplus e\ [a_1, \ldots, a_n] \equiv e \oplus a_1 \oplus \ldots \oplus a_n \), it is possible to fuse a \( \text{reduce} \) with a \( \text{map} \) as \[ \text{reduce} \oplus e \,.\, \text{map } f \equiv \text{reduce} \oplus e \,.\, \text{map } (\text{reduce} \oplus e \,.\, \text{map } f) \,.\, \text{split}_P \] i.e., the list is distributed between \( P \) processors, each processor executes its chunk sequentially, and at the end the result is reduced (in parallel) across processors [7]. One may observe that this fusion may asymptotically improve the space requirement, e.g., if the input array is the iteration space. The operator \( \oplus \) must be associative with identity \( e \). Another observation is that both \( \text{map} \) and \( \text{reduce} \) have well-known efficient parallel implementations and that fusion preserves parallelism. While in a functional language the (richer) semantics of operators such as \( \text{map} \) and \( \text{reduce} \) makes fusion easy to understand and to implement by the compiler, this is not the case in an imperative context.
For example, fusing two parallel loops requires proving that the set of array indices written by iteration \( i \) of the producer loop, denoted \( W_i \), is a superset of the set of indices read by iteration \( i \) of the consumer loop, denoted \( R_i \), and also that \( W_i \cap R_j = \emptyset, \forall i \neq j \). This analysis quickly requires “heroic effort” when the loops’ bodies exhibit non-affine indexing or non-trivial control flow, e.g., loop nests. Current fusion work in the functional context takes two main directions: One approach is to perform fusion aggressively, i.e., even when it duplicates computation, and to provide the user with primitives that inhibit fusion [15, 22]. The other approach performs fusion conservatively by means of rewrite rules or skeletons [6, 13, 21]. Since the latter relies tightly on the inlining engine, e.g., to create exploitable patterns, these approaches typically do not fuse an array that is used multiple times. Main contributions of this paper are:

- A structural-analysis technique that succeeds in conservatively fusing, i.e., without duplicating computation, even when an array has multiple uses, if the dependency graph of the producer-consumer array combinator is reducible (via \( \text{T}_2 \) reduction). The compositional algebra for fusion includes the \( \text{map} \), \( \text{reduce} \) and \( \text{filter} \) operators. These are presented in Section 3.
- An analysis refinement that enables fusion across (otherwise) “inhibitor” built-in functions such as \( \text{size} \), \( \text{split} \), \( \text{transpose} \).
- Two other transformations (not implemented yet) that enable and are enabled by fusion and that optimize the use of \( \text{scan} \) and \( \text{reduce} \). For example, SWIM interchanges the outer \( \text{scan} \) with an inner \( \text{map} \) nest, and thus enhances the (exploitable) parallelism degree of the program. These are discussed in Section 4.
Finally, Section 5 compares with related work and Section 6 concludes the paper.

2. Preliminaries: \( L_0 \) and Enabling Optimizations

\( L_0 \) is a mostly-monomorphic, statically typed, strictly evaluated, purely functional language. Although some built-in higher-order functions provide polymorphism, user-defined functions are monomorphic, and their return and parameter types must be specified explicitly. For brevity, we will not cover the typing rules in detail, as they are quite simple. The only exception is our notion of \textit{uniqueness types}, which is given a cursory treatment in Section 2.2.

\[
\begin{align*}
t ::= \;& \text{int} && (\text{Integers}) \\
\mid\;& \text{real} && (\text{Floats}) \\
\mid\;& \text{bool} && (\text{Booleans}) \\
\mid\;& \text{char} && (\text{Characters}) \\
\mid\;& \{t_1, \ldots, t_n\} && (\text{Tuples}) \\
\mid\;& [t] && (\text{Arrays})
\end{align*}
\]

\[
p ::= \text{id} \;\;(\text{Name pattern}) \;\mid\; (p_1, \ldots, p_n) \;\;(\text{Tuple pattern})
\]

\[
\begin{align*}
e ::= \;& k && (\text{Constant}) \\
\mid\;& \text{id} && (\text{Variable}) \\
\mid\;& (e_1, \ldots, e_n) && (\text{Tuple expression}) \\
\mid\;& \{e_1, \ldots, e_n\} && (\text{Array expression}) \\
\mid\;& e_1 \oplus e_2 && (\text{Binary operator}) \\
\mid\;& \sim e && (\text{Prefix minus}) \\
\mid\;& \text{not } e && (\text{Logical negation}) \\
\mid\;& \text{if } e_1 \text{ then } e_2 \text{ else } e_3 && (\text{Branching}) \\
\mid\;& \text{id}[e_1, \ldots, e_n] && (\text{Indexing}) \\
\mid\;& \text{id}(e_1, \ldots, e_n) && (\text{Function call})
\end{align*}
\]

\[
\text{fun} ::= \textbf{fun}\; t\; \text{id}(t_1\; \text{id}_1, \ldots, t_n\; \text{id}_n) = e
\qquad
\text{prog} ::= \text{fun} \;\mid\; \text{fun}\;\text{prog}
\]

\text{Figure 1.} \( L_0 \) syntax

The syntax of the first-order fragment of \( L_0 \) can be seen in Figure 1. Common arithmetic operators such as \( + \) and \( / \) are supported in the usual way. Note that they are polymorphic in the sense that they accept both integers and floating-point numbers, although both operands must be of the same type. Pattern matching is supported in a limited way, as the only means of decomposing tuple values; there is no case construct. Note that braces denote arrays, not sets. \( \text{zip} \) and \( \text{unzip} \) behave as usual, i.e., \( \text{zip}(\{1,2,3\},\{4,5,6\}) = \{(1,4),(2,5),(3,6)\} \), but the semantics of \( \text{zip} \) requires that the input arrays have outermost dimensions of equal sizes; otherwise a compile-time or runtime error is signalled. There are two non-standard constructs in the first-order part of \( L_0 \). The first is the let-with construct for updating parts of arrays: \[ \text{let } b = a \text{ with } [i_1, \ldots, i_k] \leftarrow v \text{ in } body \] The above evaluates body with b bound to the value of a, except that the element at position \((i_1, \ldots, i_k)\) is updated to the value of v. If \(a[i_1, \ldots, i_k]\) is itself an array, i.e., if a has more than k dimensions, v must be an array of the same size as the slice it is replacing. We write \( \text{let } a[i_1, \ldots, i_k] = v \text{ in } body \) as a shorthand when the source and destination array share the same name. Section 2.2 describes our method for doing array updates in-place without losing referential transparency. This allows a cost model in which the update takes time proportional to the total size of v, rather than of a.
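The value semantics of let-with can be sketched in plain Haskell as a pure positional update (the name `letWith` and the list representation are ours, for illustration; the actual implementation updates in place, as described in Section 2.2):

```haskell
-- 'letWith a i v' has the value semantics of
--   let b = a with [i] <- v in ...
-- i.e., the result equals a except at position i, and a is untouched.
letWith :: [a] -> Int -> a -> [a]
letWith a i v = [ if j == i then v else x | (j, x) <- zip [0 ..] a ]
```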
The other non-standard construct is the do-loop. It is essentially syntactic sugar for a certain form of tail-recursive function, and is used by the programmer to express sequential computation that would be awkward to write functionally, and by the compiler in lower-level optimisations, e.g., loop interchange and loop distribution. For example, denoting by t the type of x, the loop in Figure 2 has the semantics of a call to the tail-recursive function shown alongside it. ### 2.1 Second-order array combinators Most array operations in \(L_0\) are achieved through built-in second-order array combinators (SOACs). The available SOACs can be seen in Figure 3, along with our syntax for anonymous and curried functions. As \(L_0\) is first-order, these anonymous functions are only syntactically permitted in SOAC invocations. The parentheses may be omitted when currying if no curried arguments are given. The semantics of the SOACs is identical to the similarly-named higher-order functions found in many functional languages, but we reproduce it here for completeness. Note that the types given are not \(L_0\) types, but a Haskell-inspired notation, since the SOACs cannot be typed in \(L_0\) itself.

\[
\begin{align*}
&\text{loop } (x = a) = && \text{fun } t\; f(\text{int } i, \text{int } n, t\; x) = \\
&\quad \text{for } i < n \text{ do } g(x) && \quad \text{if } i \geq n \text{ then } x \text{ else } f(i+1, n, g(x)) \\
&\text{in } body && \text{let } x = f(0, n, a) \text{ in } body
\end{align*}
\]

### Figure 2. Loop to recursive function

\[
\begin{align*}
l ::= \;& \textbf{fn}\; t\; (t_1\; \text{id}_1, \ldots, t_n\; \text{id}_n) \Rightarrow e && (\text{Anonymous function}) \\
\mid\;& \text{id}\,(e_1, \ldots, e_n) && (\text{Curried function}) \\
\mid\;& \text{op} \odot (e_1, \ldots, e_n) && (\text{Curried operator})
\end{align*}
\]

\[
e ::= \ldots \mid \text{map}(l, e) \mid \text{filter}(l, e) \mid \text{reduce}(l, x, e) \mid \text{redomap}(l, l_m, x, e)
\]

### Figure 3.
Second-order array combinators

In particular, note that \(\text{scan}\) and \(\text{reduce}\) require binary associative operators, and \(\text{scan}\) is an inclusive prefix scan. \(\text{redomap}\) is a special case: it is not part of the external \(L_0\) language, but is used internally for fusing \(\text{reduce}\) and \(\text{map}\). Its semantics is as follows.

\[
\text{redomap} :: (\alpha \rightarrow \alpha \rightarrow \alpha) \rightarrow (\alpha \rightarrow \beta \rightarrow \alpha) \rightarrow \alpha \rightarrow [\beta] \rightarrow \alpha
\]
\[
\text{redomap}(\oplus, g, e, v) \equiv \text{foldl}(g, e, v)
\]

Note that the runtime semantics is a left-fold, not a normal \(L_0\) reduce. We use a Haskell-like syntax to explain the rationale behind \(\text{redomap}\): \( (\text{red} \oplus e)\,.\,\text{map } f \) can be formally transformed, via the list-homomorphism promotion lemma [7], to an equivalent form:

\[
\text{red} \oplus e \,.\, \text{map } f \equiv \text{red} \oplus e \,.\, \text{map } (\text{red} \oplus e \,.\, \text{map } f) \,.\, \text{split}_p,
\]

where the original list is distributed to \(p\) parallel processors, each of which executes the original map-reduce computation sequentially and, at the end, the per-processor results are reduced in parallel. Hence the inner map-reduce can be rewritten as a left-fold:

\[
\text{red} \oplus e \,.\, \text{map } f \equiv \text{red} \oplus e \,.\, \text{map } (\text{foldl } g\ e) \,.\, \text{split}_p, \quad \text{where } g\ x\ y = x \oplus f\ y.
\]

It follows that in order to generate parallel code for \((\text{red} \oplus e) \,.\, (\text{map } f)\) we need to record either \(\oplus\) and \(f\), or \(\oplus\) and \(g\). We chose the latter, i.e., \(\text{redomap}(\oplus, g, e)\), because it allows a richer compositional algebra for fusion. (In particular, it allows fusing \(\text{reduce} \circ \text{map} \circ \text{filter}\) into a \(\text{redomap}\) without duplicating computation; see Figure 14 in Section 3.4.)
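The equivalence above can be checked executably in plain Haskell (the helper names `redomapSeq`, `redomapPar` and `splitP` are ours; agreement of the two forms relies on \( \oplus \) being associative with identity \( e \)):

```haskell
-- Sequential semantics of redomap(⊕, g, e): a left fold with g.
redomapSeq :: (a -> b -> a) -> a -> [b] -> a
redomapSeq g e = foldl g e

-- The parallel evaluation plan:  red ⊕ e . map (foldl g e) . split_p
redomapPar :: Int -> (a -> a -> a) -> (a -> b -> a) -> a -> [b] -> a
redomapPar p op g e = foldl op e . map (foldl g e) . splitP p
  where
    splitP k ys
      | k <= 1 || null ys = [ys]
      | otherwise =
          let (c, rest) = splitAt (max 1 (length ys `div` k)) ys
          in  c : splitP (k - 1) rest
```

With \( \oplus = (+) \), \( e = 0 \) and \( g\ x\ y = x + f\ y \), both forms compute \( \text{red } (+)\ 0\ (\text{map } f\ xs) \).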
#### 2.1.1 Tuple shimming

As a notational convenience, \(L_0\) will automatically unwrap tuples passed to functions in SOACs. Precisely, if a function expects arguments of types \(t_1, \ldots, t_n\), and is called with a single argument of type \(\{t_1, \ldots, t_n\}\) (that is, a tuple containing exactly the types expected by the function), \(L_0\) will automatically rewrite the function to expect a tuple, and insert the code necessary to extract its components. This permits us to write \(\text{map(op +, zip(xs, ys))}\) rather than the following, more cumbersome code:

\[
\text{map(fn int ((int, int) a) => let (x, y) = a in x + y, zip(xs, ys))}
\]

We will make use of this shortcut in this paper.

#### 2.2 Safe in-place updates

When writing sequential loops, it is often very convenient to update the elements of an array in place. However, in order to perform such an update without violating referential transparency, we must be able to guarantee that no other array that is used on any execution path following the update shares data with the updated array. To perform this check, \(L_0\) uses an extension to the type system in the form of \textit{uniqueness attributes}, inspired by Clean [4] and Rust [20], as well as aliasing analysis. We extend the syntax for array types to permit a prefix asterisk, as in Figure 4, denoting a unique array. If a type is of the form \(*\tau\), we say that it is a unique type. The semantics of uniqueness attributes is as follows. Inside a function, a parameter having type \(*\tau\) means that neither the argument value nor any of its aliases is going to be used after the function returns, implying that the function body can modify the argument freely. Furthermore, if the return type is unique, its value must not alias any non-unique arguments. The intuition is that the function can be considered to have exclusive access to its unique argument, and a caller to have exclusive access to a unique return value.
In a function call, an argument passed for a parameter of type \( *\tau \) must be modifiable (that is, it must not alias a non-modifiable function argument), and neither it nor any of its aliases may be used in any way after the function call. We say that it has been consumed. As a concrete example, a function using sequential loops and in-place modification to compute the LU-factorisation of an array can be written like this. Note that the two inner loops could be written as maps, but have been left sequential for expository purposes.

```plaintext
fun {*[[real]], *[[real]]} lu_inplace(*[[real]] a) =
  let n = size(a) in
  loop ((a, l, u) = (a, replicate(n, replicate(n, 0.0)),
                        replicate(n, replicate(n, 0.0)))) =
    for k < n do
      let u[k,k] = a[k,k] in
      loop ((l, u)) = for i < n-k do
        let l[i+k,k] = a[i+k,k] / u[k,k] in
        let u[k,i+k] = a[k,i+k] in
        (l, u)
      in
      loop (a) = for i < n-k do
        loop (a) = for j < n-k do
          let a[i+k,j+k] = a[i+k,j+k] - l[i+k,k] * u[k,j+k] in
          a
        in a
      in
      (a, l, u)
  in (l, u)
```

After an array has been used on the right-hand side of a let-with, we mark it and all of its aliases as consumed, and it may not be used afterwards. Our aliasing analysis is rather conservative. In particular, we assume that if a function returns a non-unique array, then that array may alias any of the function arguments. We also do not detect aliasing at a finer granularity than whole arrays; i.e., after let \(a = b[0]\), \(a\) aliases all of \(b\), not only its first row. ### 2.3 Compiler pipeline The compilation pipeline of the current \(L_0\) compiler is outlined in Figure 5. Type checking is done on the original program, to ensure that any error messages refer to the names written by the programmer, but all subsequent stages consume and produce programs in which names are distinct. To begin with, we run a transformation that converts most uses of tuples to a simpler form, described in more detail in Section 2.3.1.
After this comes let- and tuple-normalisation, where the program is transformed so that the only direct operands to functions, SOACs and operators are variables, and so that every tuple pattern is fully expanded to cover all elements of the tuple value to be matched. One notable property of the resulting program is that no variable is ever bound to a tuple. The enabling-optimisations loop consists of: 1. aggressive inlining, i.e., building the program call graph and inlining leaves to a fixed point (this inlines all non-recursive functions), 2. copy/constant propagation and constant folding, 3. dead-code and dead-function elimination, and is repeated until a fixed point is reached. We also plan to add common-subexpression elimination and loop hoisting. After loop fusion, the enabling-optimisations stage is repeated, as fusion will often produce code amenable to copy propagation. By the time the program enters the enabling-optimisations loop, and for the rest of the compilation pipeline, the following program properties hold: - No tuple type can appear inside an array or tuple type, i.e., tuples are flat; - unzip has been eliminated, zip has been replaced with \(\text{assertZip}\), which verifies either statically or at runtime that the outer sizes of zip’s inputs match, and, finally, the original SOACs (e.g., map) have been replaced with their tuple-of-array versions (e.g., map2, see Section 2.3.1); - tuple expressions can appear only as the final result of a function, SOAC, or if expression, and similarly for the tuple pattern of a let binding, e.g., a formal argument cannot be a tuple; - \(e_1\) cannot be a let expression when used in let \(p = e_1\) in \(e_2\);
- each if is bound by a corresponding let expression, and an if’s condition cannot itself be an if expression, e.g., \( a + (\text{if } (\text{if } c_1 \text{ then } c_2 \text{ else } c_3) \text{ then } e_1 \text{ else } e_2) \Rightarrow \) let \( c = \text{if } c_1 \text{ then } c_2 \text{ else } c_3 \) in let \( b = \text{if } c \text{ then } e_1 \text{ else } e_2 \) in \( a + b \); - function calls, including SOACs, have their own let binding, e.g., \( \text{reduce2}(f, a) + x \Rightarrow \text{let } y = \text{reduce2}(f, a) \text{ in } y + x \); - all actual arguments are variables, e.g., \( f(a + b) \Rightarrow \text{let } x = a + b \text{ in } f(x) \). The first three properties are ensured by the tuple-transformation step, while the latter three are due to normalisation. 2.3.1 Tuple transformation As mentioned above, the tuple-transformation stage flattens all tuples (i.e., \( (x, (y, z)) \) becomes \( (x, y, z) \)) and converts arrays of tuples to tuples of arrays. This transformation was first developed in the context of NESL [11]. Arrays of tuples are in a sense merely syntactic sugar for tuples of arrays: the type \( [\{\text{int}, \text{real}\}] \) is transformed to \( \{[\text{int}], [\text{real}]\} \) during the compilation process, and all code interacting with arrays of tuples is likewise transformed. In most cases this is fully transparent, but there are edge cases where the transformation is not an isomorphism. Consider the type \( [\{[\text{int}], [\text{real}]\}] \), which is transformed to \( \{[[\text{int}]], [[\text{real}]]\} \). These two types are not isomorphic, as the latter makes more stringent demands on the regularity of arrays. For example, \( \{(\{1\}, \{1.0\}), (\{2,3\}, \{2.0\})\} \) is a value of the former, but the first component of the corresponding transformed tuple, \( (\{\{1\}, \{2,3\}\}, \{\{1.0\}, \{2.0\}\}) \), is not a regular array.
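The value-level transformation in this example can be mirrored with Haskell lists, where `unzip` plays the role of the array-of-tuples to tuple-of-arrays conversion (list syntax replacing L0's brace-array syntax):

```haskell
-- An array of pairs-of-arrays ...
arrayOfTuples :: [([Int], [Double])]
arrayOfTuples = [([1], [1.0]), ([2, 3], [2.0])]

-- ... becomes a pair of arrays-of-arrays; the first component,
-- corresponding to {{1},{2,3}}, is an irregular array.
tupleOfArrays :: ([[Int]], [[Double]])
tupleOfArrays = unzip arrayOfTuples
```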
Hence, when determining whether a program generates regular arrays, we must look at the transformed values; in a sense, the regularity requirement “transcends” the tuples. Also, after tuple transformation, zip and unzip disappear. After tuple transformation, the previously described SOACs are no longer usable, as they each accept only a single input array. Hence, we introduce matching tuple-SOACs, which accept as input an arbitrary number of arrays, and likewise their result is a tuple. Their syntax is depicted in Figure 6, and their semantics is:

\[
\begin{align*}
\text{map2} &:: (\alpha_1 \rightarrow \ldots \rightarrow \alpha_n \rightarrow (\beta_1, \ldots, \beta_m)) \\
&\rightarrow [\alpha_1] \rightarrow \ldots \rightarrow [\alpha_n] \rightarrow ([\beta_1], \ldots, [\beta_m]) \\
\text{reduce2} &:: (\alpha_1 \rightarrow \ldots \rightarrow \alpha_n \rightarrow \alpha_1 \rightarrow \ldots \rightarrow \alpha_n \rightarrow (\alpha_1, \ldots, \alpha_n)) \\
&\rightarrow \alpha_1 \rightarrow \ldots \rightarrow \alpha_n \rightarrow [\alpha_1] \rightarrow \ldots \rightarrow [\alpha_n] \rightarrow (\alpha_1, \ldots, \alpha_n)
\end{align*}
\]

with invocation syntax \( \text{scan2}(l, x_1, \ldots, x_n, e_1, \ldots, e_n) \) and \( \text{redomap2}(l, l_m, x_1, \ldots, x_n, e_1, \ldots, e_n) \).

Figure 6. Second-order tuple-combinators

3. Fusion: A Structural-Analysis Transformation

The fusion transformation, presented in this section, assumes a normalized program, i.e., one obtained by running the transformations introduced in Section 2 so that the properties listed in Section 2.3 hold. Structural analysis is rooted in the \(T_1\)-\(T_2\) transformation [1] depicted in Figure 7. If repeated application of \(T_1\) and \(T_2\) to a control-flow graph (CFG) reduces it to a single node, then the CFG is said to be reducible, i.e., the code can be rewritten using only structured (while) loops and if statements, without gotos (function calls are allowed). Data-flow optimizations on reducible CFGs can be modeled via equations that are applied at each \(T_1\)/\(T_2\) reduction, and consequently only one CFG pass is required instead of a fixed-point iteration.
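A minimal sketch of \(T_2\)-style reduction, written here over a dependency graph represented as a map from each node to its set of predecessors (the representation and the name `t2Reduce` are ours): any node with exactly one predecessor is repeatedly merged into that predecessor, just as the fusion analysis of Section 3 merges a single-consumer SOAC into its consuming kernel.

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

-- T2 reduction: while some node n has exactly one predecessor p,
-- merge n into p, redirecting edges that pointed at n to point at p.
-- Each step removes one node, so the recursion terminates; a graph
-- with a single root is reducible iff only the root survives.
t2Reduce :: M.Map String (S.Set String) -> M.Map String (S.Set String)
t2Reduce g =
  case [ (n, p) | (n, ps) <- M.toList g, [p] <- [S.toList ps] ] of
    []         -> g
    (n, p) : _ ->
      let g' = M.delete n g
          redirect ps
            | n `S.member` ps = S.insert p (S.delete n ps)
            | otherwise       = ps
      in  t2Reduce (M.map redirect g')
```

On the diamond graph a → {b, c} → d the reduction collapses everything into the root a.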
In practice, if the CFG is known to be reducible, then analysis can be conveniently performed source-to-source: data-flow equations are associated directly with the language constructs and dictate, for example, how the analysis result is initialized at statement level, composed between consecutive statements, merged across branches, aggregated across loops, and translated across call sites. (An example of such non-trivial analysis is the summarization [29] of array references into read-write, write-first and read-only set expressions, used in the context of array-SSA and auto-parallelization.) Since \(L_0\) guarantees a reducible CFG, fusion is implemented as an intra-procedural source-to-source transformation: a bottom-up traversal of the abstract-syntax tree (AbSyn) builds the “fusion kernels”, and a second pass substitutes them into the AbSyn and cleans up the code. Such an approach is not uncommon. What is less common is that the data-flow equations themselves model \(T_2\)-like reducibility of the data-dependency graph. The remainder of this section is organized as follows: Section 3.1 gives the gist of the technique and shows several don’t-fuse cases, which would either lead to illegal programs or to duplicated computation. Section 3.2 presents the data structures and data-flow rules of the first analysis pass for several of the \(L_0\) constructs. Section 3.3 discusses the central data-flow rule that merges a second-order array combinator (SOAC) into the fusion result. Finally, Section 3.4 presents the compositional algebra under which SOACs are fused, and briefly discusses the second analysis pass. 3.1 Motivation and Intuitive Solution Figure 8 depicts the intuitive idea on which our fusion analysis is based. The top-left figure shows the dependency graph of a simple program, where an arrow points from the consumer to the producer.
For simplicity, array variables are used only as input or output of SOAC calls, and the control flow is trivial, i.e., a basic block. The main point is that all SOACs that appear inside the box labeled Kernel 3 can be fused without duplicating any computation, even if several of the to-be-fused arrays are used in different SOACs; e.g., \(y_1\) is used to compute both \((z_1, z_2)\) and the final result. This is accomplished by means of \(T_2\) reduction on the dependency graph: - The rightmost child, i.e., map2, of the root SOAC has only one incoming edge, hence it can be fused (reduced). This is achieved (i) by replacing, in the root SOAC, the child’s output with the child’s input arrays, (ii) by inserting into the root’s lambda a call to the child’s lambda, which computes the per-element output of the child, and, finally, (iii) by removing the duplicate input arrays of the resulting SOAC. The latter introduces copy statements for all but one of the (former) lambda arguments corresponding to the same duplicated array (and removes those former arguments). The top-right part of Figure 8 shows in blue the (optimized) result of the first fusion, where the copy statements have been eliminated by copy propagation. In the new graph, the leftmost child of the root, i.e., the one computing \((z_1, z_2)\), has only one incoming edge and can be fused. The resulting graph is shown in the bottom-left part of Figure 8. Among the don’t-fuse cases of Figure 9 (discussed below), two concern loops and execution paths: 3. Fusing across a loop (or a SOAC's lambda) would duplicate computation and potentially change the time complexity of the program, because the loop-invariant computation of the producer would be redundantly executed loop-count times. 4. If a SOAC-produced array \(x\) is consumed by two other SOACs at two program points located on the same execution path \(p\), then the fused program would compute \(x\) twice on \(p\). However, if the two program points are located on disjoint execution paths, then fusion is allowed.
For example, if the map2 computing \(z\) is removed, then fusing \(x\)'s producer into the then and else branches does not duplicate computation. 5. If a SOAC-produced array \(x\) is used in any way other than as input to another SOAC, then fusing \(x\)'s producer is disallowed in our implementation. A future extension might handle the “negligible-overhead” cases; e.g., substituting \(x[i]\) with \(f(a[i])\) would allow the fusion of \(x\)'s producer at the cost of computing \(f(a[i])\) twice. 6. Finally, fusing two arrays produced by the same SOAC, each into a different consumer SOAC, still duplicates computation. This case can sometimes be optimized by horizontally fusing the SOAC consumers, e.g., two reduce2s over \(x\) and \(y\) can be merged into a single reduce2 whose operator applies the two original operators pointwise. We conclude this section with two remarks. First, the data-dependency graph (DDG) does not have to be built, since a bottom-up traversal of the program, i.e., a backwards analysis, is guaranteed to encounter the statements in an order that satisfies the DDG. Second, the intuition is not complete, as it does not solve the issues described in the don’t-fuse cases, e.g., violation of the in-place-update semantics, handling fusion across loops and branches, etc. The next two sections present the backward-analysis pass in detail. ### 3.2 Data Structures and Data-Flow Rules for Fusion The \(L_0\) compiler is implemented in Haskell. Figure 10 shows the structure of the data-flow result of the bottom-up analysis pass, i.e., a synthesized attribute.
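Following the prose description below, a hypothetical Haskell rendering of these structures might look as follows (field types are our assumptions, and the compiler's expression type is abbreviated to a String here):

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Name = String

-- A fused kernel (fused_vars in the text is fusedVars here).
data FusedKer = FusedKer
  { soacStmt  :: ([Name], String)  -- output arrays paired with SOAC expression
  , inp       :: S.Set Name        -- input arrays of the fused SOAC
  , inplace   :: S.Set Name        -- aliases consumed since the SOAC was met
  , fusedVars :: [Name]            -- arrays fused while building this kernel
  }

-- The synthesized data-flow result of the bottom-up pass.
data FusionRes = FusionRes
  { outArr    :: M.Map Name Name          -- array -> kernel producing it
  , inpArr    :: M.Map Name (S.Set Name)  -- array -> kernels consuming it
  , unfusable :: S.Set Name               -- arrays that may not be fused
  , kernels   :: M.Map Name FusedKer      -- kernel name -> kernel data
  }
```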
A fused kernel, FusedKer, consists of: - the SOAC statement, soacStmt, which pairs up the output arrays of a fused SOAC with its (AbSyn) expression; - the set of input arrays, inp, of the fused SOAC; - a set of variable names, inplace, which is the union of the alias sets of all arrays that may have been “consumed” on any execution path between the currently-analyzed program point and the one where the fused SOAC was encountered; e.g., in Case 1 of Figure 9, a belongs to the inplace set of the kernel associated with output \(y\) when the analysis reaches the definition of \(x\); - a set of array variables, fused_vars, that have been fused in the construction of the current kernel; if fused_vars = ∅ then no fusion has taken place, and soacStmt remains in the program. The analysis result is implemented by the FusionRes structure: - outArr maps an array name to the (name of the) kernel producing it; - inpArr maps an array name \(x\) to the set of (names of) kernels whose corresponding SOACs receive \(x\) as an input array; - unfusable is a set of array names that the analysis up to the current program point has discovered to be unfusable, because of one of Cases 3 to 6 in Figure 9, e.g., arrays used other than as SOAC input, or used in two different kernels that share an execution path; - kernels maps a kernel name to its associated FusedKer data. --- ### Figure 8.
Fusion By T2 Transformation on the Dependency Graph

(The figure shows the example program — x1 = map2(f0, x2); x3 = map2(f, x1); (y1, y2, y3) = map2(f1, x1, x2); (z1, z2) = map2(f2, y1, y2); (q1, q2) = map2(g, z1, z2) — with the fusable Kernel 3 boxed, together with the successively T2-reduced versions in which the map2s of Kernel 3 are merged into a single map2 with a composed lambda.)

### Figure 9. Don’t-Fuse Cases: Illegal or Duplicated Computation

The bottom-left graph can be fused again, resulting in the bottom-right graph of Figure 8. At this point no further T2 reduction is possible, because the SOAC computing \(x_1\) has two incoming edges. When no T2 reduction is possible, a new kernel is started, e.g., Kernel 1. Having presented the intuitive idea, we look next at six cases, depicted in Figure 9, where fusion is disallowed because it would result either in incorrect programs or in duplicated computation: 1. A SOAC cannot be fused across the “in-place update” of an array \(a\) if that SOAC uses any variable in the alias set of \(a\). Otherwise, fusion would violate \(L_0\)'s semantics, because an alias of \(a\) would be used on an execution path following \(a\)'s in-place update. 2. Not all combinations of SOACs are fusible. For example, a map whose input arrays are produced by two filter operations can be fused as a loop, but this is not useful due to the sequential nature of the resulting loop, i.e., it uses two induction variables without closed-form solutions. Fusion assumes a normalized program, i.e., one seen as a set of functions whose bodies are composed of blocks of let statements, ifs and loops.
An array is consumed either (i) when it is the source of an in-place update or (ii) when it is passed to a function call as a parameter of unique type.

Figure 10. Data Structures for Fusion's Result and Environment

```haskell
allOutArrs :: [Name] -> FusionM [Name]
-- allOutArrs [x] -> [x, y]  if  let (x, y) = map2(f, a)
allOutArrs = foldM (\acc nm -> do
    bnd <- asks $ M.lookup nm . soacsEnv
    case bnd of
      Nothing        -> return (nm : acc)
      Just (outs, _) -> return (outs ++ acc)
  ) []

composeRes :: FusionRes -> FusionRes -> FusionM FusionRes
composeRes res1 res2 = do
  inpArr1 <- (allOutArrs . M.keys . inpArr) res1
  inpArr2 <- (allOutArrs . M.keys . inpArr) res2
  let ufus = unfusable res1 `S.union` unfusable res2
  return $ FusionRes
      (outArr res1 `M.union` outArr res2)
      (M.unionWith S.union (inpArr res1) (inpArr res2))
      (ufus `S.union` (S.fromList inpArr1 `S.intersection`
                       S.fromList inpArr2))
      (kernels res1 `M.union` kernels res2)

unionRes :: FusionRes -> FusionRes -> FusionM FusionRes
unionRes res1 res2 =
  return $ FusionRes
      (outArr res1 `M.union` outArr res2)
      (M.unionWith S.union (inpArr res1) (inpArr res2))
      (unfusable res1 `S.union` unfusable res2)
      (kernels res1 `M.union` kernels res2)

-- (Residue of Figure 12, the data-flow rules of fusion1: the
--  let-binding of a map2 binds the pattern in the environment,
--  analyses the lambda, and calls tryFuseSOAC on (pat, rhs);
--  similar for reduce2, filter2 and redomap2, while replicate
--  has a specialized implementation (not shown). An arbitrary
--  let-binding extends varsEnv and recurses into body and rhs.)
```

Figure 11. Composing two regions on the same and disjoint paths

We note that inpArr ∪ unfusable covers all the array variables defined in the program, except for the ones that are currently out of scope. The analysis uses an environment, FusionEnv, that records the set of array variables visible at the current program point, varsEnv, and a map that binds an array name to the SOAC statement producing it, soacsEnv. The environment is computed during the top-down AbSyn traversal, i.e., it is an inherited attribute. The computation takes place in the FusionM monad, which makes the FusionEnv environment available via a Reader monad interface.
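The essential difference between the two merge rules of Figure 11 can be isolated at set level (a toy sketch with our own names, where each region is summarized by its unfusable set and its inpArr key set):

```haskell
import qualified Data.Set as S

-- Merging two regions on disjoint paths: plain union, since a SOAC
-- may feed kernels on both branches without duplicating computation.
unionUnfusable :: (S.Set String, S.Set String)
               -> (S.Set String, S.Set String) -> S.Set String
unionUnfusable (u1, _) (u2, _) = u1 `S.union` u2

-- Merging two regions on the same path: arrays used as SOAC input
-- in both regions additionally become unfusable.
composeUnfusable :: (S.Set String, S.Set String)
                 -> (S.Set String, S.Set String) -> S.Set String
composeUnfusable (u1, i1) (u2, i2) =
  u1 `S.union` u2 `S.union` (i1 `S.intersection` i2)
```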
Figure 11 shows the helper functions composeRes and unionRes, which implement the data-flow equations for merging the results of two code regions located on the same and on disjoint control-flow paths, respectively. unionRes performs the (semantic) union of the results of the two branches, e.g., the union of the unfusable sets, the union of the fusion kernels, etc., because a SOAC can be fused into two kernels on separate branches without duplicating computation. composeRes is similar to unionRes, except that the intersection between the inpArr key sets becomes unfusable, because the arrays in the intersection may be used by two kernels on the same execution path, and hence further fusion of their producing SOACs may duplicate computation. Figure 12 summarizes the most relevant data-flow rules of the first analysis pass. Function fusion1 provides the implementation, where the arguments represent the current data-flow result, denoted r, and an expression. If the expression is: - an array variable, Var id, then the variable’s name is added to the unfusable set of the result, since it corresponds to an array used other than as input to a SOAC, i.e., Case 5 in Figure 9; - an in-place-update expression, then (i) the new binding is added to the varsEnv set of in-scope variables, (ii) the inplace field of each kernel in the current result is updated with the alias set of the consumed variable, and (iii) the contribution of the sub-expressions is added to the data-flow result. A function call that “consumes” some of its parameters is treated similarly. This solves Case 1 of Figure 9; - an if-then-else, then the data-flow results of the then and else branches are computed independently, starting from a (fresh) null result, because no fusion is possible either across branches or with the kernels obtained from the expression following the if (visibility issues). The two branch results correspond to disjoint paths and are composed via \( \text{unionRes} \).
Finally, the original result \( r \) is updated with the contribution of the if condition, and the overall result is computed via \( \text{composeRes} \), since the condition and the branches correspond to overlapping control-flow paths. This solves both issues of Case 4 in Figure 9. - a lambda, then the result for the lambda’s body, \( r' \), is computed from a null result, because the arrays defined inside the lambda are invisible outside it. Since a lambda’s body may be executed multiple times inside a SOAC, the data-flow equation is semantically \( \text{composeRes} \ r \ (\text{composeRes} \ r' \ r') \), i.e., the code regions corresponding to \( r \) and \( r' \) share an execution path. In particular, all variables used in the lambda’s body become unfusable, which prevents fusion into a lambda (or loop) from outside it, i.e., Case 3 in Figure 9. The actual implementation performs less computation and also filters out the variables that are invisible in the outer scope. A loop is treated similarly. - the let-binding of a map2, reduce2, redomap2 or filter2 statement, then (i) both environment fields, i.e., \( \text{soacsEnv} \) and \( \text{varsEnv} \), are updated with the new bindings, (ii) the body of the SOAC's lambda is processed, e.g., the variables used inside the lambda become unfusable, and (iii) \( \text{tryFuseSOAC} \) (see Section 3.3) either fuses the SOAC or creates a new kernel. - an arbitrary let-binding, then the \( \text{varsEnv} \) environment field is updated with the new binding, and the sub-expressions, i.e., body and \( e \), are processed recursively. \( \text{scan2} \) is processed in this fashion, since it is not part of the fusion algebra. ### 3.3 Fusing One SOAC Figure 13 shows the Haskell pseudo-code that implements the central step of the analysis: the data-flow rule for processing a SOAC statement, \( \text{tryFuseSOAC} \).
Its parameters are: (i) the set of variables that are visible in the current scope and are used in the current SOAC's lambda, lam_vars, (ii) the current data-flow result, res, and (iii) the output arrays and the expression of the to-be-processed SOAC statement, (out_nms, soac). There are four conditions, all of which have to be met for the current SOAC, denoted \( s_c \), to be fused with at least one kernel: 1. None of \( s_c \)'s output-array names, out_nms, belongs to the result's unfusable set. 2. There exist some kernels, to_fuse_kers, found by look-ups in the result's inpArr, whose input arrays belong to out_nms. If both conditions are met, then it is guaranteed (see Section 3.2) that all to_fuse_kers are located on disjoint execution paths, and that none of them is in a loop or lambda that does not contain \( s_c \), i.e., Cases 2 to 6 of Figure 9 do not happen. 3. All kernels in to_fuse_kers are compatible with \( s_c \) under the algebra depicted in Figure 14 of Section 3.4. Otherwise, if only some kernels were compatible, computation would be duplicated, since \( s_c \) could not be removed from the program. 4. None of \( s_c \)'s variables, including the ones used in its lambda and as input arrays, belongs to the inplace set of any kernel in to_fuse_kers. Otherwise, Case 1 of Figure 9 may apply. The next step is to update the unfusable set of the data-flow result. At this stage the variables used in the lambda of the current SOAC \( s_c \) have already been made unfusable. It remains to check whether any input array of \( s_c \), i.e., inp_nms, is also used in an existing kernel. If so, then it is used in at least two kernels that may share an execution path, and hence it is unfusable.
```haskell
-- tryFuseSOAC :: S.Set Name        -- vars used in the SOAC's lambda
--             -> FusionRes          -- current result
--             -> ([Name], Exp)      -- SOAC stmt (output names, expression)
--             -> FusionM FusionRes  -- result after fusion
tryFuseSOAC lam_vars res (out_nms, soac) = do
  inp_nms <- getInpArrsSOAC soac       -- e.g., [x, y] for map2(f, x, y)
  -- Conditions for fusion:
  -- (i) none of out_nms belongs to the unfusable set
  let cond1 = not (any (`S.member` unfusable res) out_nms)
  -- (ii) there exist some kernels that use some of out_nms as inputs
  let to_fuse_knms = getKersWithInpArrs res out_nms
  let to_fuse_kers = map (\nm -> fromJust (M.lookup nm (kers res))) to_fuse_knms
  let cond2 = not (null to_fuse_kers)
  -- (iii) all kernels have to be compatible for fusion,
  --       e.g., map2 o filter2 not supported
  let cond3 = all (isCompatibleKer out_nms soac) to_fuse_kers
  -- (iv) fusion cannot move a use of an input array
  --      past its in-place update
  let used  = S.intersection (lam_vars `S.union` S.fromList inp_nms)
  let cond4 = all (S.null . used . inplace) to_fuse_kers
  let is_fusable = cond1 && cond2 && cond3 && cond4
  -- Update the unfusable set with the input-array names that also
  -- appear as input arrays in kernels not in to_fuse_kers, since
  -- those are input to at least 2 distinct kernels.
  inp_nms' <- allOutArrs inp_nms       -- extend with the outputs of producing
                                       -- SOACs, e.g., with y if the input is x
                                       -- and (x, y) = map2(f, a)
  let mod_kers = if is_fusable then to_fuse_knms else []
  let in2_kers = filter (inpArrInRes res mod_kers) inp_nms'
  let res' = res { unfusable = unfusable res `S.union` S.fromList in2_kers }
  if is_fusable
  then ... -- fuse the current soac with all to_fuse_kers;
           -- update the outArr, inpArr, and kers fields of the result
  else ... -- add a fresh kernel to the result, and
           -- update the outArr and inpArr fields
```

Figure 13. Pseudo-code for Conservatively Fusing one SOAC

We make two remarks. First, if an array in the input set of \( s_c \) was produced by another SOAC, \( s_2 \), then \( \text{inp\_nms} \) is extended with all output arrays of \( s_2 \); otherwise Case 6 of Figure 9 may apply. This is achieved by \( \text{allOutArrs} \) (shown in Figure 11) via look-ups in \( \text{soacsEnv} \). Second, an input array, \( x \), does not become unfusable if \( s_c \) can be fused and \( x \) is used only as input to the kernels with which \( s_c \) will be fused: in this case \( x \) is still used only in kernels located on disjoint execution paths. (This is implemented by filtering modulo the kernels \( \text{mod\_kers} \) in the definition of \( \text{in2\_kers} \).) Finally, if any of the four fusion conditions is not met, then a new kernel is created; otherwise the current SOAC is fused with each of the kernels in \( \text{to\_fuse\_kers} \).
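The disjoint-execution-path reasoning used throughout Sections 3.2 and 3.3 can be made concrete with a deliberately tiny model. The Python below is illustrative only: a result is shrunk to two sets, and the names `compose_res`/`unify_res` are assumptions that merely mirror the roles of \( \text{composeRes} \) for overlapping paths and of its disjoint-path counterpart.

```python
# Toy model of fusion-result composition. A result is reduced to two sets:
#   "unfusable": arrays that may no longer be fused, and
#   "consumed":  arrays used as SOAC input in the code region.

def unify_res(r1, r2):
    """Disjoint execution paths (e.g., the two branches of an if): union."""
    return {"unfusable": r1["unfusable"] | r2["unfusable"],
            "consumed":  r1["consumed"]  | r2["consumed"]}

def compose_res(r1, r2):
    """Overlapping execution paths: an array consumed in both regions may be
    consumed twice on one path, so it becomes unfusable."""
    shared = r1["consumed"] & r2["consumed"]
    return {"unfusable": r1["unfusable"] | r2["unfusable"] | shared,
            "consumed":  r1["consumed"]  | r2["consumed"]}

# If rule: branches are unified (disjoint), then composed with the outer result.
outer  = {"unfusable": set(), "consumed": {"a"}}
then_r = {"unfusable": set(), "consumed": {"a"}}   # then-branch also consumes a
else_r = {"unfusable": set(), "consumed": {"b"}}
after_if = compose_res(outer, unify_res(then_r, else_r))
assert after_if["unfusable"] == {"a"}   # a shares a path with itself; b does not

# Lambda rule: composeRes r (composeRes r' r') -- composing the body's result
# with itself marks every array consumed in the body as unfusable.
body = {"unfusable": set(), "consumed": {"x"}}
after_lam = compose_res(outer, compose_res(body, body))
assert "x" in after_lam["unfusable"]
```

The self-composition in the lambda rule is what makes the model match the paper's claim that all variables used in a lambda's body become unfusable.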
### 3.4 Fusion’s Composition Rules and Second Analysis Pass

The algebra under which fusion is performed is depicted in Figure 14: \( \text{scan2} \) is unfusable, and \( \text{reduce2} \) and \( \text{redomap2} \) always start a new kernel. Since \( \text{replicate} \) is semantically a \( \text{map2} \) with a constant function, it can always be fused without duplicating computation, even inside loops and SOACs’ lambdas, except for the cases when it violates the in-place semantics, i.e., Case 1 of Figure 9. If the current SOAC, \( s_c \), is a \( \text{map2} \), then it can be fused with a \( \text{map2} \), \( \text{reduce2} \), or \( \text{redomap2} \) kernel, producing a \( \text{map2} \), \( \text{redomap2} \), or \( \text{redomap2} \) kernel, respectively.

### 4. Discussion, Possible Extensions, and Statistics

This section is structured as follows: Section 4.1 discusses how to solve certain cases where fusion is inhibited by calls to built-in functions such as `size`, `split`, etc. Section 4.2 presents code transformations aimed at optimizing uses of `scan2` and `reduce2`. Since these transformations both enable and are enabled by fusion, we plan to incorporate them in our fusion-analysis implementation in the near future. Section 4.3 shows fusion-related statistics from six benchmarks, generated by compiler instrumentation.

### 4.1 Fusion Hindrances

We have seen that the structural analysis presented in the previous section may allow fusion even when a variable is used as an input array to several second-order array combinators (SOACs), such as `map2` and `reduce2`. Often enough, however, fusion is impeded by a SOAC input array being used as an argument to built-in functions such as `split`, `transpose`, `size`, and `assertZip`. For example:

```plaintext
let x       = map2(f, a)   in         let n       = size(a)/2   in
let n       = size(x)/2    in         let (a1,a2) = split(n, a) in
let (x1,x2) = split(n, x)  in    =>   let x1      = map2(f, a1) in
let y = reduce2(g, x1, x2) ...        let x2      = map2(f, a2) in
                                      let y = reduce2(g, x1, x2) ...
```

The use of `x` in `split` and in `reduce2` in the left-hand-side code would inhibit fusion. The code on the right suggests a possible solution: propagate the inhibitors as far up in the program as possible, e.g., `a` and `x` are the input and the result of a `map2`, and therefore they must have the same (outermost) size. After the rewrite, `x1` and `x2` are produced by `map2`s, which can be fused with the `reduce2` that computes `y`. To handle such cases, we have extended our analysis to associate call statements of inhibitor functions (for now `size` and `assertZip`) with the kernels that use, e.g., the argument of `size` as input, and to perform the necessary translations, e.g., `size(x) => size(a)`, at the time when `x` is fused.

Figure 15 demonstrates the application of our analysis to a flat-parallelism implementation of matrix multiplication [22]. One can observe (i) that the obtained code resembles the common implementation, i.e., a `reduce ◦ map` inside two nested `map`s, (ii) that all `replicate`s have been eliminated by fusion, and (iii) that the `assertZip`s have been moved so as not to hinder fusion.

### 4.2 Possible Extensions: ISWIM/IRWIM and REDFLAT

The fusion algebra for `reduce2` and `scan2` is relatively poor: `scan2` is not fused at all, while `reduce2` just starts a new kernel that, under fusion, becomes `redomap2`. This section presents three high-level transformations, named ISWIM, IRWIM, and REDFLAT, that have `scan2` and `reduce2` as principal actors, and that may enable fusion, be enabled by fusion, or both.

The top part of Figure 16 shows the intuitive idea behind ISWIM: a `scan` on a matrix, in which the binary associative operator is `zipWith ◦` for some operator ◦, has the same semantics as transposing the matrix, scanning each of the rows, i.e., the former columns, with ◦, and transposing back the result.
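The ISWIM equivalence can be checked concretely. The following is an illustrative Python model (not L0): both sides use inclusive scans, and plain `+` plays the role of the operator ◦.

```python
from itertools import accumulate

def scan_rows_zipwith_add(mat):
    # Scan over the outer dimension, combining whole rows pointwise:
    # the operator is zipWith (+), the neutral element a row of zeros.
    out, acc = [], [0.0] * len(mat[0])
    for row in mat:
        acc = [x + y for x, y in zip(acc, row)]
        out.append(list(acc))
    return out

def transpose(mat):
    return [list(col) for col in zip(*mat)]

def iswim(mat):
    # ISWIM: transpose, scan each (former) column with plain +, transpose back.
    return transpose([list(accumulate(col)) for col in transpose(mat)])

a = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
assert scan_rows_zipwith_add(a) == iswim(a) == [[1.0, 2.0], [4.0, 6.0], [9.0, 12.0]]
```

The payoff of the rewrite is the one described in the text: the sequential `scan` moves to the innermost position, exposing the outer `map` (here, the per-column loop) as parallel work.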
Figure 15. Flat-Parallelism Matrix Multiplication, and Matrix Multiplication After Fusion.

Figure 16. Interchange Scan With Inner Maps (ISWIM) Transform, its arbitrary-nested-level generalization, the analogous Interchange Reduce With Inner Maps (IRWIM), and fusing across `transpose` (similar for reshape/flatten), i.e., the `map2` produced by ISWIM may be further fused.

The Reduce Flattening transformation (REDFLAT) says that a map that reduces each of the input-array elements, followed by a reduce with the same operator (and neutral element), has the semantics of reducing the original array in which the first two dimensions have been flattened. We plan to extend the fusion analysis to incorporate both transformations.

We remark that all transformations presented in this paper, i.e., fusion, ISWIM/IRWIM, and REDFLAT, are the result of the rich algebra exposed by the second-order array combinators, and that they would require a difficult implementation in an imperative context. For example, ISWIM interchanges an inner parallel loop (the map) outwards across a sequential loop (the scan). The fact that the inner loop is parallel is not in general sufficient to guarantee the correctness of the loop interchange, albeit a parallel loop can always be interchanged inwards.

### 4.3 Fusion-Analysis Statistics

Since the $L_0$ language is currently interpreted, we cannot at this point directly measure the effectiveness of our implementation of fusion analysis, which comprises about 1000 lines of Haskell code. We have instrumented the $L_0$ compiler to keep track of how often, and what types of, SOACs it successfully fuses. The results are reproduced in Figure 18. We count as “interesting” those fusions in which the number of tuple elements produced by the producer is less than the number used by the consumer.
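The REDFLAT equivalence from Section 4.2 can be checked on a small example. This is an illustrative Python model with `+` (neutral element `0`) as the reduction operator; REDFLAT requires the operator to be associative with that neutral element.

```python
from functools import reduce as fold
import operator

def redflat_lhs(mat):
    # Reduce each row, then reduce the per-row results (map o reduce, then reduce).
    return fold(operator.add, (fold(operator.add, row, 0) for row in mat), 0)

def redflat_rhs(mat):
    # REDFLAT: reduce the array with its first two dimensions flattened.
    return fold(operator.add, (x for row in mat for x in row), 0)

m = [[1, 2, 3], [4, 5, 6]]
assert redflat_lhs(m) == redflat_rhs(m) == 21
```

The right-hand side exposes a single flat reduction, which is the form that admits the efficient segmented-reduce GPU execution mentioned below.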
The test programs are:

- **P0**: a real-world pricing kernel for financial derivatives [26].
- **P1**: a real-world market-calibration kernel that computes some of the parameters of P0.
- **P2**: a real-world kernel for stochastic-volatility calibration via a Crank-Nicolson finite-differences solver [25].
- **P3**: a flat-parallelism, array-based implementation of the shortest-path algorithm (whose shape resembles matrix multiplication).
- **P4**: the flat-parallelism implementation of matrix multiplication shown in Figure 15, i.e., similar to the one of REPA [22].
- **P5**: an implementation of the maximal segment sum problem (MSSP).

The results indicate that the (redo)map ◦ map reductions are the most common, and that there is a significant number of reductions involving `replicate`. The reason for the latter is that `replicate` is often used to match either the array sizes, as in the case of the flat-parallelism-style matrix multiplication, or the result type of a `reduce`. Our aggressive fusion of `replicate` eliminates these inefficiencies; e.g., it transforms the flat-parallel matrix multiplication into the typical three-level nest of two `map`s and a `redomap`. There is only one redomap ◦ filter reduction, appearing in the Sobol random-number generator, albeit an important one: it would increase the degree of parallelism by a factor of $32$, i.e., the size of the input array, and would also allow efficient computation on GPGPU, i.e., a segmented reduce of a power-of-two size requires only local barriers, rather than multiple executions of a GPU kernel. Finally, benchmark P0 presents a case where the application of ISWIM would have a significant impact: ISWIM would provide an extra dimension of exploitable parallelism of size $365$ on a data set that is starved for additional parallelism.

### 5. Related Work

Loop fusion is an old technique, dating back at least to the seventies [14], with the treatment of loop fusion in a parallel setting being covered in [24].
In imperative languages, the word “fusion” typically does not refer to producer-consumer fusion, but to a complementary technique in which two sequential loops that do not depend on each other are fused into a single loop. Single Assignment C [18] incorporated this technique in a functional language. The ideas behind a language algebra date back to the very beginnings of functional programming [3], and an algebra that is a subset of ours was presented in [8]. In general, functional-language compilers have focused on removing intermediate data structures via a structural technique called deforestation, which also performs certain kinds of fusion [17].

Data Parallel Haskell (DPH) [13] makes use of aggressive inlining and rewrite rules to perform fusion, including expressing array operations in terms of streams [16], which have previously been shown to be easily fusible. While DPH obtains good results, rewrite rules are quite limited: they provide an inherently local view of the computation, and are unable to cope with the presence of in-place array updates, or to track whether the result of an array operation is used multiple times. The Glasgow Haskell Compiler itself also bases its list fusion on rewrite rules and cross-module inlining [21].

The Repa [22] approach to fusion is based on a delayed representation of arrays, which models an array as a function from index to value. With this representation, fusion happens automatically through function composition, although this can cause duplication of work in many cases. To counteract this, Repa lets the user force an array, by which it is converted from the delayed representation to a traditional sequence of values. The pull arrays of Obsidian [15] use a similar mechanism. Accelerate [23] uses an elaboration of the delayed-array representation from Repa, and in particular manages to avoid duplicating work.
All array operations have a uniform representation as constructors for delayed arrays, on which fusion is performed by tree contraction. Accelerate supports multiple arrays as input to the same array operation (using a `zipWith` construct), and arrays are usually used at least twice (once for getting the size, once for the data); however, it does not seem that Accelerate can handle the difficult case where the output of an array operation is used as input to two other array operations.

NESL has been extended with a GPU backend [6], for which the authors note that fusion is critical to the performance of the flattened program. Their approach is to use a form of copy propagation on the intermediate code and to lift the resulting functions to work on entire arrays. This approach, however, only works for what we would term map ◦ map fusion.

Our uniqueness attributes have some similarities to the “owning pointers” found in the impure language Rust [20], although there are deep differences. In Rust, owning pointers are used to manage memory: when an owning pointer goes out of scope, the memory it points to is deallocated. We, instead, use uniqueness attributes to handle side effects. In addition, we allow function calls to consume arrays passed as unique-type parameters, whereas in Rust this causes a deep copy of the object referenced by the owning pointer.

A closer similarity is found in the pure functional language Clean, which contains a sophisticated system of uniqueness typing [5]. Clean employs uniqueness typing to re-use memory when a function receives a unique argument, but also (and perhaps more importantly) to control side effects, including arbitrary I/O. As in $L_0$, alias analysis is used to ensure that uniqueness properties are not violated.
A notable difference is that the Clean language itself does not have any facilities for consuming unique objects, apart from specifying a function parameter as unique, but delegates this to (unsafe) internal functions that are exposed safely via the type system. Furthermore, a unique return value in Clean may alias some of the parameters of the function, which is forbidden in $L_0$. We have found that this restriction greatly simplifies the analysis and allows it to be fully intraprocedural.

### 6. Conclusions and Future Work

Previous work on fusion has taken two main directions: either fusion is performed aggressively and the programmer is provided with primitives to inhibit fusion, e.g., by forcing an array to materialize, or fusion is performed via rewrite rules. The latter approach relies heavily on the inlining engine, and its applicability is limited to the case when each fused array is consumed by exactly one array combinator.

This paper has presented a program-level, structural-analysis approach to fusion that handles the difficult case in which an array produced by a second-order array combinator (SOAC), such as map, is consumed by several other SOACs (provided that the SOAC producer-consumer dependency graph is reducible). This essentially allows fusion to operate across zip/unzip. Furthermore, we have shown a compositional algebra for fusion that includes array combinators, such as map, reduce, filter, and redomap, and other built-in functions that would otherwise hinder fusion’s applicability, such as size, split, transpose, etc. Finally, we have discussed two transformations, ISWIM and REDFLAT, that optimize some important uses of scan and reduce, and that can both enable and be enabled by fusion.
### Acknowledgments

This research has been partially supported by the Danish Strategic Research Council, Program Committee for Strategic Growth Technologies, for the research center ‘HIPERFIT: Functional High Performance Computing for Financial Information Technology’ (http://hiperfit.dk) under contract number 10-092299.

### References